I'm having a small issue with a WebAPI method that downloads a file when the user calls the route of the method.
The method itself is rather simple:
public HttpResponseMessage Download(string fileId, string extension)
{
    var location = ConfigurationManager.AppSettings["FilesDownloadLocation"];
    var path = HttpContext.Current.Server.MapPath(location) + fileId + "." + extension;
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    var stream = new FileStream(path, FileMode.Open);
    result.Content = new StreamContent(stream);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    return result;
}
The method works as expected - the first time I call it. The file is transmitted and my browser starts downloading the file.
However - if I call the same URL again from either my own computer or from any other - I get an error saying:
The process cannot access the file
'D:\...\App_Data\pdfs\test-file.pdf' because it is being used by
another process.
This error persists for about a minute - and then I can download the file again - but only once - and then I have to wait another minute or so until the file is unlocked.
Please note that my files are rather large (100-800 MB).
Am I missing something in my method? It almost seems like the stream locks the file for some time or something like that?
Thanks :)
It is because your file is locked by the first stream: you must specify a FileShare mode that allows it to be opened by multiple streams:
public HttpResponseMessage Download(string fileId, string extension)
{
    var location = ConfigurationManager.AppSettings["FilesDownloadLocation"];
    var path = HttpContext.Current.Server.MapPath(location) + fileId + "." + extension;
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
    result.Content = new StreamContent(stream);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    return result;
}
That way you allow multiple streams to open the file for reading only.
See the MSDN documentation on that constructor overload.
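As a side note (an equivalent shorthand, not from the original answer): File.OpenRead opens with FileMode.Open, FileAccess.Read and FileShare.Read, so it produces the same sharing-friendly stream. A minimal sketch demonstrating two concurrent readers:

```csharp
using System;
using System.IO;

class ShareDemo
{
    static void Main()
    {
        string path = Path.GetTempFileName();
        File.WriteAllText(path, "hello");

        // File.OpenRead(path) is equivalent to
        // new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read)
        using (var first = File.OpenRead(path))
        using (var second = File.OpenRead(path)) // second reader succeeds, no lock error
        {
            Console.WriteLine(second.Length); // 5
        }
        File.Delete(path);
    }
}
```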
Related
I'm trying to read a .zip file from S3 into a stream in C# and write the entries back to the originating folder in S3. I've looked at the myriad of SO questions, watched videos, etc. trying to get this right, and I seem to be missing something. I'm now farther than I was originally, but I'm still getting stuck. (I really wish Amazon would just implement a decompress method, because this seems to come up a lot, but no such luck yet.) Here is my code currently:
private async Task<string> DecompressFile(string bucketName, string keystring)
{
    AmazonS3Client client = new AmazonS3Client();
    Stream fileStream = new MemoryStream();
    string sourceDir = keystring.Split('/')[0];
    GetObjectRequest request = new GetObjectRequest { BucketName = bucketName, Key = keystring };
    try
    {
        using (var response = await client.GetObjectAsync(request))
        using (var arch = new ZipArchive(response.ResponseStream))
        {
            foreach (ZipArchiveEntry entry in arch.Entries)
            {
                fileStream = entry.Open();
                string newFile = sourceDir + "/" + entry.FullName;
                using (Amazon.S3.Transfer.TransferUtility tranute = new Amazon.S3.Transfer.TransferUtility(client))
                {
                    var upld = new Amazon.S3.Transfer.TransferUtilityUploadRequest();
                    upld.InputStream = fileStream;
                    upld.Key = newFile;
                    upld.BucketName = bucketName;
                    await tranute.UploadAsync(upld);
                }
            }
        }
        return $"Decompression complete for {keystring}...";
    }
    catch (Exception e)
    {
        ctxt.Logger.LogInformation($"Error decompressing file {keystring} from bucket {bucketName}. Please check the file and try again.");
        ctxt.Logger.LogInformation(e.Message);
        ctxt.Logger.LogInformation(e.StackTrace);
        throw;
    }
}
The error I keep hitting now is on the write process at await tranute.UploadAsync(upld). The error I'm getting is:
$exception {"This operation is not supported."} System.NotSupportedException
Here are the exception details:
System.NotSupportedException
HResult=0x80131515
Message=This operation is not supported.
Source=System.IO.Compression
StackTrace:
at System.IO.Compression.DeflateStream.get_Length()
at Amazon.S3.Transfer.TransferUtilityUploadRequest.get_ContentLength()
at Amazon.S3.Transfer.TransferUtility.IsMultipartUpload(TransferUtilityUploadRequest request)
at Amazon.S3.Transfer.TransferUtility.GetUploadCommand(TransferUtilityUploadRequest request, SemaphoreSlim asyncThrottler)
at Amazon.S3.Transfer.TransferUtility.UploadAsync(TransferUtilityUploadRequest request, CancellationToken cancellationToken)
at File_Ingestion.Function.<DecompressFile>d__13.MoveNext() in File-Ingestion\Function.cs:line 136
This exception was originally thrown at this call stack:
[External Code]
File_Ingestion.Function.DecompressFile(string, string) in Function.cs
Any help would be greatly appreciated.
Thanks!
I think the problem is that AWS needs to know the length of the file before it can be uploaded, but the stream returned by ZipArchiveEntry.Open doesn't know its length upfront.
See how the exception is thrown when TransferUtilityUploadRequest.ContentLength tries to call DeflateStream.Length (which always throws), where DeflateStream is ultimately the thing returned from ZipArchiveEntry.Open.
(It's slightly odd that DeflateStream doesn't report its own decompressed length. It certainly knows what it should be, but that's only an indication which might be wrong, so maybe it wants to avoid reporting a value which might be incorrect.)
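You can reproduce the root cause in isolation. This minimal sketch (not from the question's code) builds a zip in memory, reopens it, and shows that the stream returned by ZipArchiveEntry.Open doesn't support Length:

```csharp
using System;
using System.IO;
using System.IO.Compression;

class LengthDemo
{
    static void Main()
    {
        var ms = new MemoryStream();
        using (var zip = new ZipArchive(ms, ZipArchiveMode.Create, leaveOpen: true))
        using (var writer = new StreamWriter(zip.CreateEntry("a.txt").Open()))
        {
            writer.Write("hello");
        }

        ms.Position = 0;
        using (var reopened = new ZipArchive(ms, ZipArchiveMode.Read))
        using (var entryStream = reopened.Entries[0].Open())
        {
            try
            {
                Console.WriteLine(entryStream.Length);
            }
            catch (NotSupportedException)
            {
                // This is the same failure TransferUtility hits internally
                // when it queries ContentLength.
                Console.WriteLine("Length is not supported on this stream");
            }
        }
    }
}
```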
I think that what you need to do is to buffer the extracted file in memory, before passing it to AWS. This way, we can find out the uncompressed length of the stream, and this will be correctly reported by MemoryStream.Length:
using var fileStream = entry.Open();
// Copy the fileStream into an in-memory MemoryStream
using var ms = new MemoryStream();
fileStream.CopyTo(ms);
ms.Position = 0;
string newFile = sourceDir + "/" + entry.FullName;
using (Amazon.S3.Transfer.TransferUtility tranute = new Amazon.S3.Transfer.TransferUtility(client))
{
    var upld = new Amazon.S3.Transfer.TransferUtilityUploadRequest();
    upld.InputStream = ms;
    upld.Key = newFile;
    upld.BucketName = bucketName;
    await tranute.UploadAsync(upld);
}
A faster and cleaner approach:
using var fileStream = entry.Open();
using (var output = new S3UploadStream(_s3Client, "<s3 bucket>", "<key_name>"))
{
    await fileStream.CopyToAsync(output);
}
The S3UploadStream class is taken from https://github.com/mlhpdx/s3-upload-stream
I'm trying to serve a file to the users that is packed inside a zip archive on the server.
The project is ASP.NET Core 5.0 MVC project.
I managed to do it by using ZipArchiveEntry.Open() and copying that to a memory stream.
string zipFile = @"D:\all_installs.zip";
using (FileStream fs = new FileStream(zipFile, FileMode.Open))
{
    using (ZipArchive zip = new ZipArchive(fs))
    {
        ZipArchiveEntry entry = zip.Entries.FirstOrDefault(x => x.FullName == "downloadable file.iso");
        string name = entry.FullName;
        string baseName = Path.GetFileName(name);
        //open a stream to the zip entry
        Stream stream = entry.Open();
        //copy stream to memory
        MemoryStream memoryStream = new MemoryStream();
        stream.CopyTo(memoryStream); //big memory usage?
        memoryStream.Position = 0;
        return this.File(memoryStream, "application/octet-stream", baseName);
    }
}
This would require a lot of RAM if there are many simultaneous downloads, so instead I would like to serve it directly from the archive, which I know will require the CPU while unpacking it, but that's fine since the download speed will be very limited anyways.
I tried serving stream directly, but I get the following error:
NotSupportedException: Stream does not support reading.
How can I serve the entry-stream directly?
The problem is that both FileStream fs and ZipArchive zip are disposed here, so when it's time to write the response and ASP.NET tries to read your zip entry (stream), it's not available any more, since everything has been disposed.
You need to not dispose them right away but instead tell asp.net to dispose them when it's done writing the response. For that, HttpResponse has method RegisterForDispose, so you need to do something like that:
string zipFile = @"C:\tmp\record.zip";
FileStream fs = null;
ZipArchive zip = null;
Stream stream = null;
try
{
    fs = new FileStream(zipFile, FileMode.Open);
    zip = new ZipArchive(fs);
    ZipArchiveEntry entry = zip.Entries.First(x => x.FullName == "24fa535b-2fc9-4ce5-96f4-2ff1ef0d9b64.json");
    string name = entry.FullName;
    string baseName = Path.GetFileName(name);
    //open a stream to the zip entry
    stream = entry.Open();
    return this.File(stream, "application/octet-stream", baseName);
}
finally
{
    if (stream != null)
        this.Response.RegisterForDispose(stream);
    if (zip != null)
        this.Response.RegisterForDispose(zip);
    if (fs != null)
        this.Response.RegisterForDispose(fs);
}
Now ASP.NET will first write the response, then dispose all your disposables for you.
I want to call a web api method and have it allow the user to download a zip file that I create in memory. I also want to create the entries in memory as well.
I'm having trouble getting the server to correctly output the download.
Here is my web api method:
[HttpGet]
[Route("api/downloadstaffdata")]
public HttpResponseMessage DownloadStaffData()
{
    var response = new HttpResponseMessage(HttpStatusCode.OK);
    using (var stream = new MemoryStream())
    {
        using (var archive = new ZipArchive(stream, ZipArchiveMode.Create, true))
        {
            //future for loop to create entries in memory from staff list
            var entry = archive.CreateEntry("bob.txt");
            using (var writer = new StreamWriter(entry.Open()))
            {
                writer.WriteLine("Info for: Bob");
            }
            //future add staff images as well
        }
        stream.Seek(0, SeekOrigin.Begin);
        response.Content = new StreamContent(stream);
    }
    response.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment")
    {
        FileName = "staff_1234_1.zip"
    };
    response.Content.Headers.ContentType = new System.Net.Http.Headers.MediaTypeHeaderValue("application/zip");
    return response;
}
Here is my calling js code:
window.open('api/downloadstaffdata');
Here is the response from Chrome:
net::ERR_CONNECTION_RESET
I don't know what I'm doing wrong. I've already searched SO and read the articles about creating the zip file, but I can't get past the connection reset error when trying to return the zip archive to the client.
Any ideas?
You have your memory stream inside a using block. As such, your memory stream is being disposed before your controller has the chance to write it out (hence the ERR_CONNECTION_RESET).
A MemoryStream does not need to be disposed explicitly (its various derived type may need to be, but not the MemoryStream itself). Garbage Collector can clean it up automatically.
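Based on that, a corrected version of the action is the question's own code with the using removed from the MemoryStream (a sketch of the fix, not tested against the original project):

```csharp
[HttpGet]
[Route("api/downloadstaffdata")]
public HttpResponseMessage DownloadStaffData()
{
    // Not wrapped in a using block: StreamContent takes ownership, and the
    // framework disposes the response content after it has been written out.
    var stream = new MemoryStream();
    using (var archive = new ZipArchive(stream, ZipArchiveMode.Create, true))
    {
        var entry = archive.CreateEntry("bob.txt");
        using (var writer = new StreamWriter(entry.Open()))
        {
            writer.WriteLine("Info for: Bob");
        }
    }
    stream.Seek(0, SeekOrigin.Begin);

    var response = new HttpResponseMessage(HttpStatusCode.OK);
    response.Content = new StreamContent(stream);
    response.Content.Headers.ContentDisposition =
        new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment")
        {
            FileName = "staff_1234_1.zip"
        };
    response.Content.Headers.ContentType =
        new System.Net.Http.Headers.MediaTypeHeaderValue("application/zip");
    return response;
}
```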
In ASP.NET webapi, I send a temporary file to client. I open a stream to read the file and use the StreamContent on the HttpResponseMessage. Once the client receives the file, I want to delete this temporary file (without any other call from the client)
Once the client receives the file, the Dispose method of HttpResponseMessage is called and the stream is also disposed. Now I want to delete the temporary file as well, at this point.
One way to do it is to derive a class from HttpResponseMessage class, override the Dispose method, delete this file & call the base class's dispose method. (I haven't tried it yet, so don't know if this works for sure)
I want to know if there is any better way to achieve this.
Actually your comment helped solve the question... I wrote about it here:
Delete temporary file sent through a StreamContent in ASP.NET Web API HttpResponseMessage
Here's what worked for me. Note that the order of the calls inside Dispose differs from your comment:
public class FileHttpResponseMessage : HttpResponseMessage
{
    private string filePath;

    public FileHttpResponseMessage(string filePath)
    {
        this.filePath = filePath;
    }

    protected override void Dispose(bool disposing)
    {
        base.Dispose(disposing);
        Content.Dispose();
        File.Delete(filePath);
    }
}
Create your StreamContent from a FileStream having DeleteOnClose option.
return new HttpResponseMessage(HttpStatusCode.OK)
{
    Content = new StreamContent(
        new FileStream("myFile.txt", FileMode.Open,
            FileAccess.Read, FileShare.None, 4096, FileOptions.DeleteOnClose)
    )
};
I did it by reading the file into a byte[] first, deleting the file, then returning the response:
// Read the file into a byte[] so we can delete it before responding
byte[] bytes;
using (var stream = new FileStream(path, FileMode.Open))
{
    bytes = new byte[stream.Length];
    stream.Read(bytes, 0, (int)stream.Length);
}
File.Delete(path);
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
result.Content = new ByteArrayContent(bytes);
result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
result.Content.Headers.Add("content-disposition", "attachment; filename=foo.bar");
return result;
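A note on the snippet above: a single Stream.Read call isn't strictly guaranteed to fill the whole buffer. File.ReadAllBytes does the same job, handles short reads internally, and releases the file handle as soon as it returns (a small variation, not from the original answer):

```csharp
using System.IO;

// Read the whole file into memory, then delete it before responding.
byte[] bytes = File.ReadAllBytes(path);
File.Delete(path);
```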
We are providing files that are saved in our database and the only way to retrieve them is by going by their id as in:
www.AwesomeURL.com/AwesomeSite.aspx?requestedFileId=23
Everything is working fine, as I am using the WebClient class.
There's only one issue that I am facing:
How can I get the real filename?
My code looks like this atm:
WebClient client = new WebClient ();
string url = "www.AwesomeURL.com/AwesomeSite.aspx?requestedFileId=23";
client.DownloadFile(url, "IDontKnowHowToGetTheRealFileNameHere.txt");
All I know is the id.
This does not happen when I access the url from the browser, where it gets the proper name => DownloadedFile.xls.
What's the proper way to get the correct response?
I had the same problem, and I found this class: System.Net.Mime.ContentDisposition.
using (WebClient client = new WebClient())
{
    client.OpenRead(url);
    string header_contentDisposition = client.ResponseHeaders["content-disposition"];
    string filename = new ContentDisposition(header_contentDisposition).FileName;
    ...do stuff...
}
The class documentation suggests it's intended for email attachments, but it works fine on the server I used to test, and it's really nice to avoid the parsing.
Here is the full code required, assuming the server has applied content-disposition header:
using (WebClient client = new WebClient())
{
    using (Stream rawStream = client.OpenRead(url))
    {
        string fileName = string.Empty;
        string contentDisposition = client.ResponseHeaders["content-disposition"];
        if (!string.IsNullOrEmpty(contentDisposition))
        {
            string lookFor = "filename=";
            int index = contentDisposition.IndexOf(lookFor, StringComparison.CurrentCultureIgnoreCase);
            if (index >= 0)
                fileName = contentDisposition.Substring(index + lookFor.Length);
        }
        if (fileName.Length > 0)
        {
            using (StreamReader reader = new StreamReader(rawStream))
            {
                File.WriteAllText(Server.MapPath(fileName), reader.ReadToEnd());
                reader.Close();
            }
        }
        rawStream.Close();
    }
}
If the server did not set this header, try debugging and see what ResponseHeaders you do have; one of them will probably contain the name you desire. If the browser shows the name, it must come from somewhere.. :)
You need to look at the content-disposition header, via:
string disposition = client.ResponseHeaders["content-disposition"];
a typical example would be:
"attachment; filename=IDontKnowHowToGetTheRealFileNameHere.txt"
I achieved this with wst's code.
Here is the full code to download the file at the url into the c:\temp folder:
public static void DownloadFile(string url)
{
    using (WebClient client = new WebClient())
    {
        client.OpenRead(url);
        string header_contentDisposition = client.ResponseHeaders["content-disposition"];
        string filename = new ContentDisposition(header_contentDisposition).FileName;
        //Start the download and copy the file to the destinationFolder
        client.DownloadFile(new Uri(url), @"c:\temp\" + filename);
    }
}
You can use HTTP content-disposition header to suggest filenames for the content you are providing:
Content-Disposition: attachment; filename=downloadedfile.xls;
So, in your AwesomeSite.aspx script, you would set the content-disposition header. In your WebClient class you would retrieve that header to save the file as suggested by your AwesomeSite site.
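For example, the server side could look roughly like this in the AwesomeSite.aspx code-behind (a hypothetical sketch; the helper methods and the way files are looked up are assumptions, not from the original question):

```csharp
protected void Page_Load(object sender, EventArgs e)
{
    // Hypothetical: look up the file by the requestedFileId query parameter.
    int fileId = int.Parse(Request.QueryString["requestedFileId"]);
    byte[] fileBytes = GetFileBytesFromDatabase(fileId); // assumed helper
    string fileName = GetFileNameFromDatabase(fileId);   // assumed helper

    Response.ContentType = "application/octet-stream";
    Response.AddHeader("Content-Disposition", "attachment; filename=" + fileName);
    Response.BinaryWrite(fileBytes);
    Response.End();
}
```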
Although the solution proposed by Shadow Wizard works well for text files, I needed to support downloading binary files, such as pictures and executables, in my application.
Here is a small extension to WebClient that does the trick. The download is asynchronous. A default value for the file name is also required, because we don't really know if the server will send all the right headers.
static class WebClientExtensions
{
    public static async Task<string> DownloadFileToDirectory(this WebClient client, string address, string directory, string defaultFileName)
    {
        if (!Directory.Exists(directory))
            throw new DirectoryNotFoundException("Downloads directory must exist");

        string filePath = null;
        using (var stream = await client.OpenReadTaskAsync(address))
        {
            var fileName = TryGetFileNameFromHeaders(client);
            if (string.IsNullOrWhiteSpace(fileName))
                fileName = defaultFileName;
            filePath = Path.Combine(directory, fileName);
            await WriteStreamToFile(stream, filePath);
        }
        return filePath;
    }

    private static string TryGetFileNameFromHeaders(WebClient client)
    {
        // content-disposition might contain the suggested file name, typically same as original name on the server.
        // Originally content-disposition is for email attachments, but web servers also use it.
        string contentDisposition = client.ResponseHeaders["content-disposition"];
        return string.IsNullOrWhiteSpace(contentDisposition) ?
            null :
            new ContentDisposition(contentDisposition).FileName;
    }

    private static async Task WriteStreamToFile(Stream stream, string filePath)
    {
        // Code below will throw generously, e.g. when we don't have write access, or run out of disk space
        using (var outStream = new FileStream(filePath, FileMode.CreateNew))
        {
            var buffer = new byte[8192];
            while (true)
            {
                int bytesRead = await stream.ReadAsync(buffer, 0, buffer.Length);
                if (bytesRead == 0)
                    break;
                // Could use async variant here as well. Probably helpful when downloading to a slow network share or tape. Not my use case.
                outStream.Write(buffer, 0, bytesRead);
            }
        }
    }
}
Ok, my turn.
I had a few things in mind when I tried to "download the file":
Use only HttpClient. I had a couple of extension methods over it, and it wasn't desirable to create other extensions for WebClient.
It was mandatory for me also to get a File name.
I had to write the result to MemoryStream but not FileStream.
Solution
So, for me, it turned out to be this code:
// assuming that httpClient is created already (including the cumbersome authentication)
var response = await httpClient.GetAsync(absoluteURL); // call the external API

// reading the file name from the HTTP headers
var fileName = response.Content.Headers.ContentDisposition.FileNameStar; // also available to read from ".FileName"

// reading the file as a byte array
var fileBiteArr = await response.Content
    .ReadAsByteArrayAsync()
    .ConfigureAwait(false);

var memoryStream = new MemoryStream(fileBiteArr); // memory streamed file
Test
To test that the Stream contains what we have, we can check it by converting it to file:
// getting the "Downloads" folder location, can be anything else
string pathUser = Environment.GetFolderPath(Environment.SpecialFolder.UserProfile);
string downloadPath = Path.Combine(pathUser, "Downloads\\");

using (FileStream file =
    new FileStream(
        $"{downloadPath}/file.pdf",
        FileMode.Create,
        FileAccess.Write))
{
    byte[] bytes = new byte[memoryStream.Length];
    memoryStream.Read(bytes, 0, (int)memoryStream.Length);
    file.Write(bytes, 0, bytes.Length);
    memoryStream.Close();
}