C# Download File from a network drive

I have a file located on a network drive. The user account already has full access to the path. But when I run the following code to fetch the file, the browser simply does not respond.
FileInfo file = new FileInfo(GetDocumentUploadFolder(ID) + fileName);
// Check if the file exists
if (file.Exists)
{
    // Clear the content of the response
    this.Page.Response.ClearContent();
    // Clear the headers of the response
    this.Page.Response.ClearHeaders();
    // Set the ContentType
    this.Page.Response.ContentType = "application/pdf";
    // Write the file into the response (TransmitFile is for ASP.NET 2.0; in ASP.NET 1.1 use WriteFile instead)
    this.Page.Response.WriteFile(file.FullName);
    // End the response
    this.Page.Response.End();
}
I tried using this.Page.Response.TransmitFile(file.FullName); as well, and it does not work either. The page seems to stop functioning after this.Page.Response.End();
Any ideas?

No matter where the file is stored, your action must return a file as the result:
public FileResult GetBytes()
{
    string path = Server.MapPath("~/Files/PDFIcon.pdf");
    byte[] mas = System.IO.File.ReadAllBytes(path);
    string file_type = "application/pdf";
    string file_name = "PDFIcon.pdf";
    return File(mas, file_type, file_name);
}
Note that the application must have read access to the file resolved by Server.MapPath(filePath).
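If the file lives on a network share, the same pattern works with a UNC path in place of Server.MapPath. A minimal sketch, assuming the application pool identity (or the impersonated user) can read the share; the UNC path and names here are hypothetical:
public FileResult GetNetworkFile()
{
    // Hypothetical UNC path; the web app's identity must have read access to it.
    string path = @"\\fileserver\uploads\document.pdf";
    byte[] bytes = System.IO.File.ReadAllBytes(path);
    return File(bytes, "application/pdf", "document.pdf");
}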

I am able to work around this by first copying the file from the network drive to a local path and then calling TransmitFile from there:
FileInfo file = new FileInfo(GetDocumentUploadFolder(ID) + fileName);
string strFolder = Server.MapPath(LocalLocation);
string strDestination = Server.MapPath(LocalLocation + "\\" + fileName);
// Check if the file exists
if (file.Exists)
{
    if (!Directory.Exists(strFolder))
        Directory.CreateDirectory(strFolder);
    // Delete the contents of this folder
    Common.DeleteFiles(strFolder, "*.*");
    file.CopyTo(strDestination, true);
    // Clear the content of the response
    this.Page.Response.ClearContent();
    // Clear the headers of the response
    this.Page.Response.ClearHeaders();
    // Set the ContentType
    this.Page.Response.ContentType = "application/pdf";
    // Write the file into the response (TransmitFile is for ASP.NET 2.0; in ASP.NET 1.1 use WriteFile instead)
    this.Page.Response.TransmitFile(strDestination);
    // End the response
    this.Page.Response.End();
}
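If the local copy step is undesirable, a variation on the answer above is to read the bytes into memory and write them out directly. A minimal sketch, assuming the application identity can read the share (the whole file is buffered in memory, so this suits modest file sizes):
FileInfo file = new FileInfo(GetDocumentUploadFolder(ID) + fileName);
if (file.Exists)
{
    this.Page.Response.ClearContent();
    this.Page.Response.ClearHeaders();
    this.Page.Response.ContentType = "application/pdf";
    // Read the whole file from the UNC path and write it to the response,
    // sidestepping TransmitFile's handling of the remote file.
    byte[] bytes = File.ReadAllBytes(file.FullName);
    this.Page.Response.BinaryWrite(bytes);
    this.Page.Response.End();
}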


FtpWebRequest - Transferring file name containing fragment marker (#)

EDIT: Not sure it's the answer, but it is a workaround... rather than looking at the file transfer objects, I added .Replace to the FTP string and got the result I was looking for; the target file name now matches the source file name.
FtpWebRequest ftpRequest = (FtpWebRequest)WebRequest.Create("ftp://" + targetServer + targetPath + fileInfo.Name.Replace("#", "%23"));
I have an existing C# FTP process that has been in use for years. A new file naming convention was implemented that uses the # character in the actual file name. From what I can tell, the # is being interpreted as a fragment marker during the file transfer, resulting in an incorrect file name on the target server.
Source file name: '9300T_#Test.xml'
Target file name: '9300T_'
Is there a way to force the actual file name to be used?
When I view object values during execution I can see the original string is correct but I also see the '#Test.xml' under the Fragment property.
I've experimented with different properties of WebRequest, FtpWebRequest, and Uri. So far I have not found a combination that works and have not found a solution on the web.
I've tested using other FTP clients (DOS prompt, Mozilla) and the file is transferred correctly, which leads me to believe the solution is property driven, or it is a limitation of the objects I'm using.
Below is the code I'm testing from a Windows Forms app, which reproduces the problem.
Thanks.
string sourcePath = @"C:\FILES\";
string sourcePattern = "9300T*.xml";
string targetServer = "test_server_name";
string targetPath = "/";
string targetLogin = "server_login";
string targetPassword = "login_password";
string[] uploadFiles = Directory.GetFiles(sourcePath, sourcePattern);
// Loop through and process the list.
foreach (string file in uploadFiles)
{
    // Create the file information object.
    FileInfo fileInfo = new FileInfo(file);
    // Create the FTP request object.
    FtpWebRequest ftpRequest = (FtpWebRequest)WebRequest.Create("ftp://" + targetServer + targetPath + fileInfo.Name);
    ftpRequest.Credentials = new NetworkCredential(targetLogin, targetPassword);
    ftpRequest.KeepAlive = false;
    ftpRequest.Method = WebRequestMethods.Ftp.UploadFile;
    ftpRequest.UseBinary = true;
    ftpRequest.UsePassive = true;
    ftpRequest.ContentLength = fileInfo.Length;
    // Open a file stream to read the file being uploaded.
    FileStream readStream = fileInfo.OpenRead();
    // Create the stream the uploaded file is written to.
    Stream writeStream = ftpRequest.GetRequestStream(); // -- TARGET FILE IS CREATED WITH WRONG NAME AT THIS POINT
    // Set the buffer size.
    int bufferLength = 2048;
    byte[] buffer = new byte[bufferLength];
    // Read from and write to the streams until the content ends.
    int contentLength = readStream.Read(buffer, 0, bufferLength);
    while (contentLength != 0)
    {
        writeStream.Write(buffer, 0, contentLength);
        contentLength = readStream.Read(buffer, 0, bufferLength);
    }
    // Flush and close the streams.
    readStream.Flush();
    readStream.Close();
    writeStream.Flush();
    writeStream.Close();
    fileInfo.Delete();
}
When you concatenate the file name to build the URL for the web request, you have to escape the file name using percent-encoding; otherwise, the characters after "#" are treated as the URL fragment instead of being part of the file name.
I suggest using Uri.EscapeDataString() instead of Replace("#", "%23"), because it properly handles all the other reserved characters as well, not just '#'.
FtpWebRequest ftpRequest = (FtpWebRequest)WebRequest.Create("ftp://" + targetServer + targetPath + Uri.EscapeDataString(fileInfo.Name));
If targetPath could contain reserved characters, you may need to escape it too.
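For example, a minimal sketch that escapes every path segment as well as the file name, assuming targetPath uses '/' separators (needs a using System.Linq; directive):
// Escape each segment of the path, then the file name itself.
string escapedPath = string.Join("/", targetPath.Split('/').Select(Uri.EscapeDataString));
FtpWebRequest ftpRequest = (FtpWebRequest)WebRequest.Create(
    "ftp://" + targetServer + escapedPath + Uri.EscapeDataString(fileInfo.Name));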

How to create and save a temporary file on Microsoft Azure virtual server

I am using a free MS Azure virtual webserver for my site.
On my dev machine I can successfully create a CSV file, save it to a relative temp directory, and then download it to the browser client.
However, when I run it from the Azure site, I get the following error:
System.IO.DirectoryNotFoundException: Could not find a part of the path 'D:\home\site\wwwroot\temp\somefile.csv'.
Does the free version of Azure Websites block us from saving files to disk? If not, where are we allowed to create/save files that we generate on the fly?
Code Example
private FilePathResult SaveVolunteersToCsvFile(List<Volunteer> volunteers)
{
    string virtualPathToDirectory = "~/temp";
    string physicalPathToDirectory = Server.MapPath(virtualPathToDirectory);
    string fileName = "Volunteers.csv";
    string pathToFile = Path.Combine(physicalPathToDirectory, fileName);
    StringBuilder sb = new StringBuilder();
    // Column headers
    sb.AppendLine("First Name,Last Name,Phone,Email,Approved,Has Background Check");
    // CSV rows
    foreach (var volunteer in volunteers)
    {
        sb.AppendLine(string.Format("{0},{1},{2},{3},{4},{5}",
            volunteer.FirstName, volunteer.LastName, volunteer.MobilePhone.FormatPhoneNumber(),
            volunteer.EmailAddress, volunteer.IsApproved, volunteer.HasBackgroundCheckOnFile));
    }
    using (StreamWriter outfile = new StreamWriter(pathToFile))
    {
        outfile.Write(sb.ToString());
    }
    return File(Server.MapPath(virtualPathToDirectory + "/" + fileName), "text/csv", fileName);
}
Make sure that the ~/temp folder gets published to the server; it's possible your publish process isn't including it.
Azure Websites provides environment variables that you can use to get to things like a temporary storage folder. For example, there is a "TEMP" variable you can read to get the path to the TEMP folder specific to your website.
Change the second line in your method to this:
//string physicalPathToDirectory = Server.MapPath(virtualPathToDirectory);
string physicalPathToDirectory = Environment.GetEnvironmentVariable("TEMP");
Then change the last line to this:
//return File(Server.MapPath(virtualPathToDirectory + "/" + fileName), "text/csv", fileName);
return File(pathToFile, "text/csv", fileName);
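Putting the two changes together, the write-and-return portion of the method reduces to something like this; csvContent stands in for the StringBuilder output built earlier:
// TEMP resolves to a per-site local folder inside the Azure Websites sandbox.
string tempDir = Environment.GetEnvironmentVariable("TEMP");
string pathToFile = Path.Combine(tempDir, "Volunteers.csv");
System.IO.File.WriteAllText(pathToFile, csvContent);
return File(pathToFile, "text/csv", "Volunteers.csv");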

Download file in chunks

I've got the problem as below:
There is a SOAP web service that allows reading file streams. I need to read the whole file in chunks and transmit it to the user. None of this should block the UI thread: the user presses the 'Save' button on the save file dialog and should be able to move on to the next page or perform another action. I would be grateful for a sample solution. Note that the solution should work with IIS 5.1.
Regards,
Jimmy
You can download a file in ASP.NET byte-by-byte to the response; check MSDN about this:
try
{
    System.String filename = "C:\\downloadJSP\\myFile.txt";
    // Set the HTTP content type to "APPLICATION/OCTET-STREAM"
    Response.ContentType = "APPLICATION/OCTET-STREAM";
    // Initialize the HTTP Content-Disposition header to indicate a
    // file attachment with the default filename "myFile.txt"
    System.String disHeader = "Attachment; Filename=\"myFile.txt\"";
    Response.AppendHeader("Content-Disposition", disHeader);
    // Transfer the file byte-by-byte to the response object
    System.IO.FileInfo fileToDownload = new System.IO.FileInfo(filename);
    System.IO.FileStream fileInputStream = new System.IO.FileStream(
        fileToDownload.FullName, System.IO.FileMode.Open, System.IO.FileAccess.Read);
    int i;
    while ((i = fileInputStream.ReadByte()) != -1)
    {
        Response.Write((char)i);
    }
    fileInputStream.Close();
    Response.Flush();
    Response.Close();
}
catch (System.Exception e)
{
    // File IO errors
    SupportClass.WriteStackTrace(e, Console.Error);
}
There are some articles that may help you implement this and resolve errors:
download an excel file from byte() on https server
asp.net downloading file from ftp getting it as byte[] then saving it as file
Remote file Download via ASP.NET corrupted file
Response.WriteFile cannot download a large file
ProcessRequest method from a downloader HttpHandler:
public void ProcessRequest(HttpContext context)
{
    RequestTarget target = RequestTarget.ParseFromQueryString(context.Request.QueryString);
    Guid requestId = new Guid(context.Request.QueryString["requestId"]);
    string itemName = HttpUtility.UrlDecode(context.Request.QueryString["itemName"]);
    if (target != null &&
        !requestId.Equals(Guid.Empty) &&
        !string.IsNullOrEmpty(itemName))
    {
        HttpResponse response = context.Response;
        response.Buffer = false;
        response.Clear();
        response.AddHeader("Content-Disposition", "attachment;filename=\"" + itemName + "\"");
        response.ContentType = "application/octet-stream";
        int length = 100000, i = 0;
        byte[] fileBytes;
        do
        {
            // Read the next chunk from the web service and stream it to the client.
            fileBytes = WS.ReadFile(requestId, target, i * length, length);
            i++;
            response.OutputStream.Write(fileBytes, 0, fileBytes.Length);
            response.Flush();
        }
        while (fileBytes != null && fileBytes.Length == length);
    }
}
The whole problem is not organizing the download action itself, but satisfying the condition that the download must not block the UI thread: the user presses the 'Save' button on the save file dialog and should be able to move on to the next page or perform another action. With the solution written by Niranjan Kala, when the file is very large the user cannot see another page until the download has completed. I appreciate it, but it's not what I meant...
If I understand you correctly, you want to make the browser initiate a new request for the file without reloading the current page. The easiest approach is probably to just create a link with target="_blank". Something like this should do:
<a href="..." target="_blank">Download file</a>
If you provide a content type of application/octet-stream most browsers will save the file to disk.

Exporting to Outlook (.ics file) from a .NET web application

Basically, I'm trying to create and export a .ics file from a C# web application so the user can save it and open it in Outlook to add something to their calendar.
Here's the code I have at the moment...
string icsFile = createICSFile(description, startDate, endDate, summary);
// Get the paths required for writing the file to a temp destination on
// the server, in the directory the application runs from.
string codeBase = Assembly.GetExecutingAssembly().CodeBase;
UriBuilder uri = new UriBuilder(codeBase);
string path = Uri.UnescapeDataString(uri.Path);
string assPath = Path.GetDirectoryName(path).ToString();
string fileName = emplNo + "App.ics";
string fullPath = assPath.Substring(0, assPath.Length - 4);
fullPath = fullPath + @"\VTData\Calendar_Event\UserICSFiles";
string writePath = fullPath + @"\" + fileName; // writePath is the path to the file itself.
// If the file already exists, delete it so a new one can be written.
if (File.Exists(writePath))
{
    File.Delete(writePath);
}
// Write the file.
using (System.IO.StreamWriter file = new System.IO.StreamWriter(writePath, true))
{
    file.WriteLine(icsFile);
}
The above works perfectly. It writes the file and deletes any old ones first.
My main issue is how to get it to the user?
I tried redirecting the page straight to the path of the file:
Response.Redirect(writePath);
It does not work, and throws the following error:
htmlfile: Access is denied.
NOTE: If I copy the contents of writePath and paste it into Internet Explorer, a save file dialog box opens and allows me to download the .ics file.
I also tried prompting a save dialog box to download the file:
System.Web.HttpResponse response = System.Web.HttpContext.Current.Response;
response.ClearContent();
response.Clear();
response.ContentType = "text/plain";
response.AddHeader("Content-Disposition", "inline; filename=" + fileName + ";");
response.TransmitFile(fullPath);
response.Flush(); // Error happens here
response.End();
It does not work either.
Access to the path 'C:\VT\VT-WEB MCSC\*some of path omitted *\VTData\Calendar_Event\UserICSFiles' is denied.
Access denied error again.
What may be the problem?
It sounds like you are trying to give the user the physical path to the file instead of the virtual path. Try changing the path so it ends up in the www.yoursite.com/date.ics format instead. This will allow your users to download it. The issue is that they don't have access to the C drive on your server.
Here is a link to how to do this:
http://www.west-wind.com/weblog/posts/2007/May/21/Downloading-a-File-with-a-Save-As-Dialog-in-ASPNET
Basically, you need the following line in your code:
Response.TransmitFile( Server.MapPath("~/VTData/Calendar_Event/UserICSFiles/App.ics") );
Use this instead of the Response.Redirect(writePath); and you should be good to go.
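If you would rather force a save dialog, the second attempt in the question also works once it transmits the file (writePath) rather than its containing folder (fullPath). A minimal sketch, using text/calendar, the registered MIME type for iCalendar files:
Response.ClearContent();
Response.ClearHeaders();
Response.ContentType = "text/calendar";
Response.AddHeader("Content-Disposition", "attachment; filename=\"" + fileName + "\"");
// Transmit the .ics file itself, not the directory that contains it.
Response.TransmitFile(writePath);
Response.End();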

Posting Zip Files over HTTP No Longer Opening in Win 7 Zip Program

I have two bits of code: one that uploads a zip file, and a server that saves the upload to disk. My problem is that I upload a zip file which opens fine in the Windows 7 default zip program, but when I try to open the copy on the web server it was posted to, it won't open anymore, failing with the error:
Windows cannot open the folder. The compressed zipped folder 'blah' is invalid.
Note 1: The file opens completely fine in WinRAR and other zip programs.
Note 2: The original file and the file on the server show the exact same "size on disk", but the actual size of the one on the server is about 200 bytes bigger.
Here is the code for uploading zips:
public static String UploadFile(String url, String filePath)
{
    if (!File.Exists(filePath))
        throw new FileNotFoundException();
    try
    {
        using (var client = new WebClient())
        {
            byte[] result = client.UploadFile(url, filePath);
            UTF8Encoding enc = new UTF8Encoding();
            string response = enc.GetString(result);
            return response;
        }
    }
    catch (WebException webException)
    {
        HttpWebResponse httpWebResponse = webException.Response as HttpWebResponse;
        return (httpWebResponse == null) ? webException.Message : httpWebResponse.StatusCode.ToString();
    }
}
Here is the code on the server that saves the incoming file (it runs in the Page_Load of a .NET C# .aspx page):
private void SaveZipFile()
{
    string fileName = GenerateFileName();
    string zipPath = _hhDescriptor.GetDirectory(
        path => Server.MapPath("./" + _serviceName + "\\" + path) + "\\" + fileName + ".zip");
    if (!Directory.Exists(zipPath))
    {
        Directory.CreateDirectory(Path.GetDirectoryName(zipPath));
    }
    Request.SaveAs(zipPath, false);
    logger.Trace(string.Format("ManualUpload: Successfully saved uploaded zip file to {0}", zipPath));
}
Any ideas or suggestions as to where this could be breaking would be greatly appreciated! I am probably saving some other random stuff along with the zip file.
UPDATE 1:
When I open the server's zip file in notepad it contains
-----------------------8cd8d0e69a0670b
Content-Disposition: form-data; name="file"; filename="filename.zip"
Content-Type: application/octet-stream
So my question is how to save the zip without capturing the header info.
I believe that the problem is using HttpRequest.SaveAs. I suspect that's saving the entire request, including the HTTP headers. Look at the file in a binary file editor and I suspect you'll find the headers at the start.
Use HttpRequest.Files to get at files uploaded as part of the request, and HttpPostedFile.SaveAs to save the file to disk.
You are writing the entirety of the request which may have some Multipart MIME separators in it. I think you need to use Request.Files.
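For example, here is a minimal sketch of SaveZipFile reworked to use Request.Files; the field name "file" matches the Content-Disposition header shown in the update, and GenerateFileName, _hhDescriptor, and _serviceName are the asker's existing members:
private void SaveZipFile()
{
    string fileName = GenerateFileName();
    string zipPath = _hhDescriptor.GetDirectory(
        path => Server.MapPath("./" + _serviceName + "\\" + path) + "\\" + fileName + ".zip");
    string directory = Path.GetDirectoryName(zipPath);
    if (!Directory.Exists(directory))
    {
        Directory.CreateDirectory(directory);
    }
    // Save only the posted file's body, not the surrounding multipart headers.
    HttpPostedFile uploaded = Request.Files["file"];
    if (uploaded != null)
    {
        uploaded.SaveAs(zipPath);
    }
}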
