How to dispose a file stream in an API?
Let's assume that I need to call this API 10 times.
[HttpGet("{fileName}")]
public async Task<IActionResult> Get(string fileName)
{
    var path = Path.Combine(@"C:\test", fileName); // folder taken from the exception below
    var res = System.IO.File.Open(path, FileMode.Open);
    var file = File(res, "application/zip", fileName);
    return file;
}
I can't dispose the stream before it is returned from the API method. When I call it a second time I get this exception:
The process cannot access the file 'C:\test\example.zip' because it is being used by another process.
First of all, remember concurrency and thread safety: many requests can reach your controller at the same time, and if any of them writes to the file, the behaviour of the app can be wrong.
If you are not writing to the file (only reading), then you can just specify the sharing mode for other handles like this:
using (var file = File.Open(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite))
{
    // your code goes here
}
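Applied to the controller action above, a minimal sketch (assuming the same C:\test folder from the exception message, and that requests only read the file) could look like this; the FileStreamResult created by File(...) disposes the stream once the response has been written, so no using block is needed around it:

[HttpGet("{fileName}")]
public IActionResult Get(string fileName)
{
    var path = Path.Combine(@"C:\test", fileName); // assumed folder, taken from the exception message
    // FileShare.Read lets concurrent requests open the same file for reading.
    var stream = new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.Read);
    // The returned FileStreamResult disposes the stream after the response completes.
    return File(stream, "application/zip", fileName);
}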
While this particular question is about EmbedIO and FileResponseAsync, it probably has more to do with the handling of streams and async Tasks in C#.
I am using EmbedIO (I needed a quick and dirty web server, and so far it has worked like a charm; however, the lack of documentation is a bit frustrating), but I am attempting to return a file with the following code:
var file = new FileInfo(Path.Combine(FileLocations.TemplatePath, templateFile.FilePath));
string fileExtension = Path.GetExtension(templateFile.FilePath);
return this.FileResponseAsync(file,
    MimeTypes.DefaultMimeTypes.Value.ContainsKey(fileExtension)
        ? MimeTypes.DefaultMimeTypes.Value[fileExtension]
        : "application/octet-stream");
I get the following error:
Message
Failing module name: Web API Module
Cannot access a closed file.
Stack Trace
at System.IO.__Error.FileNotOpen()
Which makes sense, since in the EmbedIO code FileResponseAsync looks like:
using (FileStream fileStream = file.OpenRead())
    return context.BinaryResponseAsync(fileStream, ct, useGzip);
and the FileStream will be disposed as soon as BinaryResponseAsync returns its Task, before the response has actually finished. I've solved the problem by changing my code to not dispose of the FileStream:
var fileStream = file.OpenRead();
return this.BinaryResponseAsync(fileStream);
While this works, it seems wrong to rely on garbage collection to dispose of these streams at some later date. How are resources like this supposed to be handled (not only in EmbedIO, but in this modern async world in general)?
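One common pattern (a sketch, assuming BinaryResponseAsync returns a Task that the calling method is allowed to await) is to await the response inside the using block, so the stream is disposed only after the transfer has completed:

// Sketch: await the asynchronous response while the stream is still open.
// Assumes the surrounding method can be made async.
using (FileStream fileStream = file.OpenRead())
{
    await this.BinaryResponseAsync(fileStream);
}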
This is an updated question; there used to be a bug in my code.
I would like to be able to send chunks of data over to the client. Is there a way to give ASP.NET Core more control over how it streams the data? I am worried about how the code below scales. Could someone please advise how to stream data through a Web API in ASP.NET Core?
The answer that was provided and the code below work, but I am not sure how well they scale. Is it possible to retrieve chunks of data and write them to the response while only keeping the current chunk in memory, so that very large files can be downloaded? Anything will be appreciated.
var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read);
return File(fileStream, "audio/mpeg");
Applying the FileStream approach already mentioned: use the FileStream constructor that accepts a bufferSize argument, which specifies the number of bytes read into memory at a time. (You can override the default value of 4096 to fit your environment.)
public FileStream(string path, FileMode mode, FileAccess access, FileShare share, int bufferSize);
bufferSize: A positive System.Int32 value greater than 0 indicating the buffer size. The default buffer size is 4096.
public IActionResult GetFile()
{
    var filePath = @"c:\temp\file.mpg"; // Your path to the audio file.
    var bufferSize = 1024;
    var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read, bufferSize);
    return File(fileStream, "audio/mpeg");
}
Note that there's no need to dispose the fileStream; the File method takes care of this.
To clarify: when passing in a FileStream, its content is read in chunks matching the configured bufferSize.
Concretely, this means that its Read method (int Read(byte[] array, int offset, int count)) gets executed repeatedly until all bytes have been read, ensuring that no more than the given number of bytes is held in memory at once.
The scalability gain is therefore lower memory usage; memory is a resource that can come under pressure when files are large, especially in combination with a high read frequency (of this or of other files), which might otherwise cause out-of-memory problems.
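Conceptually, the copy that happens when the response is written looks something like this simplified sketch (not the actual framework code); only bufferSize bytes are buffered at any time:

// Simplified sketch of a buffered copy, not the actual ASP.NET Core implementation.
async Task CopyInChunksAsync(Stream source, Stream destination, int bufferSize)
{
    var buffer = new byte[bufferSize];
    int bytesRead;
    // Read at most bufferSize bytes per iteration and forward them immediately.
    while ((bytesRead = await source.ReadAsync(buffer, 0, buffer.Length)) > 0)
    {
        await destination.WriteAsync(buffer, 0, bytesRead);
    }
}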
Posting as a community wiki, since it doesn't technically answer the question, but the suggested code won't work as a comment.
You can return a stream directly from a FileResult, so there's no need to manually read from it. In fact, your original code didn't actually "stream", since it read the whole stream into memory and then returned the byte[] at the end. Instead, just do:
var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read);
return File(fileStream, "audio/mpeg");
Or even simpler, just return the file path and let the file result handle it completely (PhysicalFile for an absolute path, File for a path relative to the web root):
return PhysicalFile(filePath, "audio/mpeg");
I am calling a REST API which accepts a Stream to upload a file from the local device, so right now I am using the following code to get a Stream from the file, and I close that stream after the upload finishes:
var stream = new FileStream(file, FileMode.Open, FileAccess.ReadWrite);
The problem with this approach is that until the entire file has been uploaded to the server, the user has no chance to delete that file, because its stream is still open. What would be a good solution to this issue?
If your typical file is reasonably sized (and I'm hoping you won't be uploading 2 GB+ files to a REST API), you could always just read the file into memory before feeding it to your API, like so:
using (MemoryStream memoryStream = new MemoryStream())
{
    using (FileStream fileStream = new FileStream(file, FileMode.Open, FileAccess.ReadWrite))
    {
        fileStream.CopyTo(memoryStream);
    }

    memoryStream.Position = 0; // Reset to origin.

    // Now use the MemoryStream as you would a FileStream:
    api.Upload(memoryStream);
}
Another alternative is to create a temp copy of the file on your hard drive and feed that to the API - but then dealing with cleanup can become a bit cumbersome. FileOptions.DeleteOnClose is your friend and may very well suffice for your purposes, but it still offers no bulletproof guarantees.
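A minimal sketch of that temp-copy idea, assuming the API accepts any readable Stream (the api.Upload call is the same hypothetical one as above); FileOptions.DeleteOnClose removes the temp copy automatically when the stream is disposed:

// Copy the source file to a temp file, then upload from the temp copy.
// The original file can be deleted by the user as soon as the copy completes.
string tempPath = Path.GetTempFileName();
File.Copy(file, tempPath, overwrite: true);

using (var tempStream = new FileStream(tempPath, FileMode.Open, FileAccess.Read, FileShare.Read,
                                       4096, FileOptions.DeleteOnClose))
{
    api.Upload(tempStream); // temp copy is deleted when tempStream is disposed
}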
I am a little confused between two different constructors of the StreamReader class, i.e.
1. StreamReader(Stream)
I know it takes a stream of bytes as input, but the output is the same as with the other constructor.
Here is my code using the StreamReader(Stream) constructor:
string filepath = @"C:\Users\Suchit\Desktop\p022_names.txt";
using (FileStream fs = new FileStream(filepath, FileMode.Open, FileAccess.Read))
{
    using (StreamReader sr = new StreamReader(fs))
    {
        while (!sr.EndOfStream)
        {
            Console.WriteLine(sr.ReadLine());
        }
    }
}
2. StreamReader(String)
This constructor takes the physical file path where the file exists, but the output is again the same.
Here is my code using StreamReader(String):
string filepath = @"C:\Users\Suchit\Desktop\p022_names.txt";
using (StreamReader sr = new StreamReader(filepath))
{
    while (!sr.EndOfStream)
    {
        Console.WriteLine(sr.ReadLine());
    }
}
So, which one is better? When and where should we use each of them, so that our code becomes more optimized and readable?
The StreamReader class (as well as StreamWriter) is just a wrapper around a Stream; for a file, that underlying stream is a FileStream, which it needs in order to read from or write to the file.
So basically you have two options (constructor overloads):
Create the FileStream explicitly yourself and wrap the StreamReader around it
Let the StreamReader create the FileStream for you
Consider this scenario:
using (FileStream fs = File.Open(#"C:\Temp\1.pb", FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
using (StreamReader reader = new StreamReader(fs))
{
// ... read something
reader.ReadLine();
using (StreamWriter writer = new StreamWriter(fs))
{
// ... write something
writer.WriteLine("hello");
}
}
}
Both the reader and the writer work with the same FileStream. Now if we change it to:
using (StreamReader reader = new StreamReader(#"C:\Temp\1.pb"))
{
// ... read something
reader.ReadLine();
using (StreamWriter writer = new StreamWriter(#"C:\Temp\1.pb"))
{
// ... write something
writer.WriteLine("hello");
}
}
a System.IOException is thrown: "The process cannot access the file C:\Temp\1.pb because it is being used by another process." This is because we try to open the file with the second FileStream while the first one is still in use. Generally speaking, if you want to open a file, perform one read/write operation, and close it, you are fine with the StreamReader(string) overload. If you would like to use the same FileStream for multiple operations, or if for any other reason you want more control over the FileStream, you should instantiate it first and pass it to StreamReader(Stream).
Which one is better?
Neither; both are the same. As the name suggests, StreamReader is used to work with streams; when you create an instance of StreamReader with a path, it creates the FileStream internally.
When and where should we use the respective code?
When you already have the Stream up front, use the overload which takes a Stream; otherwise use the path overload.
One advantage of using the Stream overload is that you can configure the FileStream as you want. For example, if you're going to work with the asynchronous methods, you need to open the file in asynchronous mode; if you don't, the operations will not be truly asynchronous.
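For example, a small sketch (the file path is taken from the question, and an async calling method is assumed) that opens the FileStream with useAsync: true and reads it through a StreamReader:

// Open the file in asynchronous mode so ReadLineAsync is truly asynchronous.
using (var fs = new FileStream(@"C:\Users\Suchit\Desktop\p022_names.txt", FileMode.Open,
                               FileAccess.Read, FileShare.Read, 4096, useAsync: true))
using (var reader = new StreamReader(fs))
{
    string line;
    while ((line = await reader.ReadLineAsync()) != null)
    {
        Console.WriteLine(line);
    }
}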
When in doubt, don't hesitate to check the source yourself.
Note that the Stream overload isn't limited to FileStream. It allows you to read data from any subclass of Stream, which lets you do things like read the result of a web request, read decompressed data, or read decrypted data.
Use the string path overload if you only want to read from a file and you don't need to use the FileStream for anything else. It just saves you from writing a line of code:
using (var stream = File.OpenRead(path))
using (var reader = new StreamReader(stream))
{
...
}
File.OpenText also does the same thing.
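For reference, the File.OpenText shortcut looks like this (same illustrative path variable as above):

// File.OpenText returns a StreamReader over the file, equivalent to the snippet above.
using (StreamReader reader = File.OpenText(path))
{
    Console.WriteLine(reader.ReadToEnd());
}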
Both are the same, just overloads; use whichever fits your need. If you have a local file, you can use StreamReader(string path); if you only have a stream (from the network or some other source), the other overload, StreamReader(Stream stream), helps you.
Well, after searching the new open-source reference source, you can see that the string overload internally expands to the Stream-based one: passing a raw file path into the StreamReader makes it create a FileStream internally. For me this means both are equivalent, and you can use whichever you prefer.
My personal opinion is to use the string overload, because it's less code to write and it's more explicit. I don't like the way Java does it, with its thousand byte readers, stream readers, reader readers and so on...
Basically both work the same, using UTF-8 encoding and a default buffer of 1024 bytes.
But the StreamReader object calls Dispose() on the provided Stream object when StreamReader.Dispose is called.
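If you don't want the reader to dispose the stream you passed in, there is a constructor overload with a leaveOpen flag; a small sketch, reusing the path from the question:

using (var fs = new FileStream(@"C:\Users\Suchit\Desktop\p022_names.txt", FileMode.Open, FileAccess.Read))
{
    // leaveOpen: true keeps fs usable after the reader is disposed.
    using (var reader = new StreamReader(fs, System.Text.Encoding.UTF8,
                                         detectEncodingFromByteOrderMarks: true,
                                         bufferSize: 1024, leaveOpen: true))
    {
        Console.WriteLine(reader.ReadLine());
    }

    fs.Position = 0; // The underlying stream is still open here.
}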
You can refer to the documentation for the Stream and string overloads.
You can use either of them depending on what you have at hand: a Stream or a string file path.
Hope this makes it clear.
StreamReader(string) is just an overload of StreamReader(Stream).
In the context of your question, you are probably better off using the StreamReader(string) overload, just because it means less code. StreamReader(Stream) might be minutely faster, but you have to create a FileStream from the string you could have passed straight to the StreamReader, so whatever benefit you gained is lost.
Basically, StreamReader(string) is for files with static or easily mapped paths (as appears to be the case for you), while StreamReader(Stream) can be thought of as a fallback for when you can access a file programmatically but its path is difficult to pin down.
Consider that we have two methods:
Task DownloadFromAToStreamAsync(Stream destinationStream);
Task UploadToBFromStreamAsync(Stream sourceStream);
Now we need to download content from A and upload it to B in a single operation.
One of the solutions:
using (var stream = new MemoryStream())
{
await DownloadFromAToStreamAsync(stream);
stream.Seek(0, SeekOrigin.Begin);
await UploadToBFromStreamAsync(stream);
}
But this solution requires the whole stream content to be loaded in memory.
How to solve the task more efficiently?
Change the Download method to accept an additional size parameter which indicates how much to download, then loop, downloading and uploading, until a download returns no more data, as in the sketch below.
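A rough sketch of that idea; the offset/size parameters and the long return value (bytes actually written) are assumptions about how the Download method could be changed, not part of the original API:

// Hypothetical chunked transfer: only one chunk is buffered in memory at a time.
async Task TransferAsync(int chunkSize)
{
    long offset = 0;
    while (true)
    {
        using (var chunk = new MemoryStream())
        {
            // Assumed signature: downloads up to chunkSize bytes starting at offset,
            // returning how many bytes were written to the destination stream.
            long written = await DownloadFromAToStreamAsync(chunk, offset, chunkSize);
            if (written == 0)
                break; // Nothing left to download.

            chunk.Seek(0, SeekOrigin.Begin);
            await UploadToBFromStreamAsync(chunk); // Upload just this chunk.
            offset += written;
        }
    }
}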