This is an updated question; there used to be a bug in my code.
Could someone please advise how to stream data through a Web API in ASP.NET Core? I would like to be able to send chunks of data over to the client.
The answer that was provided and the code below work, but I am worried about how they scale. Is there a way to give ASP.NET Core more control over how it streams the data?
Is it possible to retrieve chunks of data and write them to the response while holding only the current chunk in memory, so that I could download very large files?
Anything will be appreciated.
FileStream fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read);
return File(fileStream, "audio/mpeg"); // File() disposes the stream after the response is written
If you apply the FileStream approach, as already mentioned, use the FileStream constructor overload that accepts a bufferSize argument, which specifies the number of bytes read into memory at a time.
(You can override the default value of 4096 to fit your environment.)
public FileStream(string path, FileMode mode, FileAccess access, FileShare share, int bufferSize);
bufferSize:
A positive System.Int32 value greater than 0 indicating
the buffer size.
The default buffer size is 4096.
public IActionResult GetFile()
{
    var filePath = @"c:\temp\file.mpg"; // Your path to the audio file.
    var bufferSize = 1024;
    var fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read, bufferSize);
    return File(fileStream, "audio/mpeg");
}
Note that there's no need to dispose the fileStream; the File method takes care of this.
To clarify:
When you pass in a FileStream, its content is read in chunks matching the configured bufferSize.
Concretely, this means that its Read method (int Read(byte[] array, int offset, int count)) is executed repeatedly until all bytes have been read, ensuring that no more than the given number of bytes is held in memory at once.
The scalability gain is the lower memory usage: memory is a resource that can come under pressure when files are large, especially in combination with a high read frequency (of this file or of others), which might otherwise cause out-of-memory problems.
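To illustrate the mechanism, here is a minimal sketch of the kind of buffered copy loop involved; it approximates what the framework does when writing a FileStream to the response, and is not the actual framework code:

using System.IO;

// Copy a stream in fixed-size chunks: at most bufferSize bytes
// are held in memory at any point, regardless of the file size.
static void CopyInChunks(Stream source, Stream destination, int bufferSize = 4096)
{
    byte[] buffer = new byte[bufferSize];
    int bytesRead;
    while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
    {
        destination.Write(buffer, 0, bytesRead);
    }
}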
Posting as a community wiki, since it doesn't technically answer the question, but the suggested code won't fit in a comment.
You can return a stream directly from a FileResult, so there's no need to manually read from it. In fact, your code doesn't actually "stream", since you're basically reading the whole stream into memory and then returning the byte[] at the end. Instead, just do:
// Note: don't wrap the stream in a using block here. The response is
// written after the action returns, and File() disposes the stream itself.
FileStream fileStream = new FileStream(filePath, FileMode.Open, FileAccess.Read, FileShare.Read);
return File(fileStream, "audio/mpeg");
Or even simpler, open the stream inline and let the FileResult handle it completely:
return File(System.IO.File.OpenRead(filePath), "audio/mpeg");
Related
I am calling a REST API that accepts a Stream to upload a file from the local device, so right now I am using the following code to get a Stream from a file and then close that stream after the upload finishes:
var stream = new FileStream(file, FileMode.Open, FileAccess.ReadWrite);
The problem with the above approach is that, until the entire file has been uploaded to the server, the user has no chance to delete that file, because its stream is open. What would be a solution to this issue?
If your typical file is reasonably sized (and I'm hoping you won't be uploading 2GB+ files to a REST API), you could always just read the stream into memory before feeding it to your API, like so:
using (MemoryStream memoryStream = new MemoryStream())
{
    using (FileStream fileStream = new FileStream(file, FileMode.Open, FileAccess.ReadWrite))
    {
        fileStream.CopyTo(memoryStream);
    }
    memoryStream.Position = 0; // Reset to origin.
    // Now use the MemoryStream as you would a FileStream:
    api.Upload(memoryStream);
}
Another alternative is to create a temp copy of the file on your hard drive and feed that to the API - but then dealing with cleanup can become a bit cumbersome. FileOptions.DeleteOnClose is your friend and may very well suffice for your purposes, but it still offers no bulletproof guarantees.
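For illustration, here is a minimal sketch of that temp-copy approach using FileOptions.DeleteOnClose; file and api.Upload are the names from the code above, everything else is an assumption:

using System.IO;

string tempPath = Path.GetTempFileName();
// DeleteOnClose asks the OS to remove the temp file when the stream is closed,
// so the copy cleans itself up even if the upload takes a while.
using (FileStream tempStream = new FileStream(tempPath, FileMode.Create,
    FileAccess.ReadWrite, FileShare.None, 4096, FileOptions.DeleteOnClose))
{
    // Copy the original file, then release it so the user can delete it.
    using (FileStream source = File.OpenRead(file))
    {
        source.CopyTo(tempStream);
    }
    tempStream.Position = 0;
    api.Upload(tempStream);
}

The original file is only held open for the duration of the copy; the upload then reads from the self-deleting temp copy.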
I've been working on a project recently that involves a lot of FileStreaming, something which I've not really touched on before.
To try and better acquaint myself with the principles of such methods, I've written some code that (theoretically) downloads a file from one dir to another, and gone through it step by step, commenting in my understanding of what each step achieves, like so...
Get fileinfo object from DownloadRequest Object
RemoteFileInfo fileInfo = svr.DownloadFile(request);
DownloadFile method in WCF Service
public RemoteFileInfo DownloadFile(DownloadRequest request)
{
RemoteFileInfo result = new RemoteFileInfo(); // create empty fileinfo object
try
{
// set filepath
string filePath = System.IO.Path.Combine(request.FilePath, request.FileName); // Path.Combine inserts the separator itself
System.IO.FileInfo fileInfo = new System.IO.FileInfo(filePath); // get fileinfo from path
// check if exists
if (!fileInfo.Exists)
throw new System.IO.FileNotFoundException("File not found",
request.FileName);
// open stream
System.IO.FileStream stream = new System.IO.FileStream(filePath,
System.IO.FileMode.Open, System.IO.FileAccess.Read);
// return result
result.FileName = request.FileName;
result.Length = fileInfo.Length;
result.FileByteStream = stream;
}
catch (Exception ex)
{
// do something
}
return result;
}
Use returned FileStream from fileinfo to read into a new write stream
// set new location for downloaded file
string basePath = System.IO.Path.Combine(@"C:\SST Software\DSC\Compilations\", compName); // Path.Combine handles the separators
string serverFileName = System.IO.Path.Combine(basePath, file);
double totalBytesRead = 0.0;
if (!Directory.Exists(basePath))
Directory.CreateDirectory(basePath);
int chunkSize = 2048;
byte[] buffer = new byte[chunkSize];
// create new write file stream
using (System.IO.FileStream writeStream = new System.IO.FileStream(serverFileName, FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
do
{
// read bytes from fileinfo stream
int bytesRead = fileInfo.FileByteStream.Read(buffer, 0, chunkSize);
totalBytesRead += (double)bytesRead;
if (bytesRead == 0) break;
// write bytes to output stream
writeStream.Write(buffer, 0, bytesRead);
} while (true);
// report end; the using block closes writeStream automatically
Console.WriteLine(fileInfo.FileName + " has been written to " + basePath + " - Done!");
}
What I was hoping for is any clarification or expansion on what exactly happens when using a FileStream.
I can achieve the download, and now I know what code I need to write in order to perform it, but I would like to know more about why it works. I can find no 'beginner-friendly' or step-by-step explanations on the web.
What is happening here behind the scenes?
A stream is just an abstraction; fundamentally, it works like a pointer into a collection of data.
Take the string "Hello World!" as an example: it is just a collection of characters, which are fundamentally just bytes.
As a stream, it could be represented to have:
A length of 12 (possibly more including termination characters etc)
A position in the stream.
You read a stream by moving the position around and requesting data.
So reading the text above could, in pseudocode, look like this:
do
get next byte
add gotten byte to collection
while not the end of the stream
the entire data is now in the collection
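In C#, that pseudocode corresponds roughly to the following sketch; the stream could be any Stream, such as a FileStream:

using System.Collections.Generic;
using System.IO;

// Read a stream one byte at a time into a collection,
// mirroring the pseudocode above.
static List<byte> ReadAllBytes(Stream stream)
{
    var collection = new List<byte>();
    int next;
    // ReadByte returns -1 once the end of the stream is reached.
    while ((next = stream.ReadByte()) != -1)
    {
        collection.Add((byte)next);
    }
    return collection; // the entire data is now in the collection
}

(In practice you would read whole buffers at a time rather than single bytes, but byte-by-byte matches the pseudocode most directly.)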
Streams are really useful when it comes to accessing data from sources such as the file system or remote machines.
Imagine a file that is several gigabytes in size, if the OS loaded all of that into memory any time a program wanted to read it (say a video player), there would be a lot of problems.
Instead, what happens is the program requests access to the file, and the OS returns a stream; the stream tells the program how much data there is, and allows it to access that data.
Depending on implementation, the OS may load a certain amount of data into memory ahead of the program accessing it, this is known as a buffer.
Fundamentally though, the program just requests the next bit of data, and the OS either gets it from the buffer, or from the source (e.g. the file on disk).
The same principle applies to streams between different computers, except requesting the next bit of data may very well involve a trip to the remote machine to request it.
The .NET FileStream class, and the Stream base class it builds on, ultimately just defer to the operating system for working with streams; there's nothing particularly special about them, it's what you can do with the abstraction that makes them so powerful.
Writing to a stream works the same way, except it puts data into the buffer, ready for the requester to access.
Infinite Data
As a user pointed out, streams can be used for data of indeterminate length.
All stream operations take time, so reading a stream is typically a blocking operation that will wait until data is available.
So you could loop forever while the stream is still open, and just wait for data to come in - an example of this in practice would be a live video broadcast.
I've since located a book - C# 5.0 All-In-One For Dummies - which explains everything about all the Stream classes, how they work, which one is most appropriate, and more.
I've only been reading for about 30 minutes and already have a much better understanding. Excellent guide!
I have a file that gets GBs of data written to it over time (looping back to the start once it reaches the end). I would like to create the file ahead of time and preallocate its storage, so that the required space is never taken up by other downloads while the file is being written. This is done using Visual Studio 2012 in C#.
I have tried:
if (fileSizeRequirement or fileName is changed) // pseudocode condition
{
    // Open or create the file, set it to the required size, close the filestream
    fileStream = new FileStream(fileName, FileMode.OpenOrCreate, FileAccess.ReadWrite);
    fileStream.SetLength((long)fileSizeRequirement);
    fileStream.Close();
}
1) Is this an appropriate way to "preallocate" space for a file?
2) Does SetLength require a seek back to the beginning afterwards, or does the position in the file stay at the start?
3) What is the correct way to preallocate storage space for a file?
Thanks ahead of time and I appreciate any suggestions.
Using SetLength is a common approach although I'd generally use a using statement here.
using (var fileStream = new FileStream(fileName, FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
    fileStream.SetLength((long)fileSizeRequirement);
}
Calling fileStream.Position straight after SetLength yields 0 so you shouldn't need to seek to the beginning.
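A quick way to verify this behaviour (a minimal sketch; the file name and size are placeholders):

using System;
using System.IO;

using (var fileStream = new FileStream("preallocated.dat", FileMode.OpenOrCreate, FileAccess.ReadWrite))
{
    fileStream.SetLength(100 * 1024 * 1024); // preallocate 100 MB
    Console.WriteLine(fileStream.Position);  // 0 - still at the beginning
    Console.WriteLine(fileStream.Length);    // 104857600
}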
I have a text file that's appended to over time and periodically I want to truncate it down to a certain size, e.g. 10MB, but keeping the last 10MB rather than the first.
Is there any clever way to do this? I'm guessing I should seek to the right point, read from there into a new file, delete old file and rename new file to old name. Any better ideas or example code? Ideally I wouldn't read the whole file into memory because the file could be big.
Please no suggestions on using Log4Net etc.
If you're okay with just reading the last 10MB into memory, this should work:
using (MemoryStream ms = new MemoryStream(10 * 1024 * 1024))
{
    using (FileStream s = new FileStream("yourFile.txt", FileMode.Open, FileAccess.ReadWrite))
    {
        s.Seek(-10 * 1024 * 1024, SeekOrigin.End); // jump to 10MB before the end
        s.CopyTo(ms);                  // copy the tail into memory
        s.SetLength(10 * 1024 * 1024); // truncate the file
        s.Position = 0;
        ms.Position = 0;               // begin from the start of the memory stream
        ms.CopyTo(s);                  // write the tail back over the file
    }
}
You don't need to read the whole file before writing it, especially not if you're writing into a different file. You can work in chunks; reading a bit, writing, reading a bit more, writing again. In fact, that's how all I/O is done anyway. Especially with larger files you never really want to read them in all at once.
But what you propose is the only way of removing data from the beginning of a file. You have to rewrite it. Raymond Chen has a blog post on exactly that topic, too.
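A minimal sketch of that rewrite-in-chunks approach, using the 10MB limit from the question (the file names are placeholders):

using System.IO;

const long keepBytes = 10L * 1024 * 1024; // keep the last 10MB

// Copy the tail of the log into a new file in small chunks,
// then swap the new file in place of the old one.
using (FileStream source = File.OpenRead("app.log"))
using (FileStream target = File.Create("app.log.tmp"))
{
    if (source.Length > keepBytes)
    {
        source.Seek(-keepBytes, SeekOrigin.End);
    }
    source.CopyTo(target, 81920); // 80KB chunks - never the whole file in memory
}
File.Delete("app.log");
File.Move("app.log.tmp", "app.log");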
I tested the solution from "false" but it didn't work for me; it trimmed the file but kept the beginning of it, not the end.
I suspected that CopyTo copies the whole stream instead of starting from the stream's position. Here's how I made it work:
int trimSize = 10 * 1024 * 1024;
using (MemoryStream ms = new MemoryStream(trimSize))
{
    using (FileStream s = new FileStream(logFilename, FileMode.Open, FileAccess.ReadWrite))
    {
        s.Seek(-trimSize, SeekOrigin.End);
        // Read may return fewer bytes than requested, so loop until
        // the full tail has been copied into memory.
        byte[] bytes = new byte[trimSize];
        int totalRead = 0;
        while (totalRead < trimSize)
        {
            int bytesRead = s.Read(bytes, totalRead, trimSize - totalRead);
            if (bytesRead == 0) break; // end of stream
            totalRead += bytesRead;
        }
        ms.Write(bytes, 0, totalRead);
        ms.Position = 0;
        s.SetLength(trimSize);
        s.Position = 0;
        ms.CopyTo(s);
    }
}
You could open the file as a binary stream and use the Seek method to retrieve just the last 10MB of data and load it into memory. Then you save this stream into a new file and delete the old one. The text could be truncated mid-line, though, so you have to decide whether that is acceptable.
Look here for an example of the Seek method:
http://www.dotnetperls.com/seek
I am working with SQL Server 2008 and FileStream data types to save large files in the database.
So far everything is working fine but the problem is my upload method looks like this:
public static void UploadFileToDatabase(string location)
{
    FileStream fs = new FileStream(location, FileMode.Open, FileAccess.Read);
    byte[] data = new byte[fs.Length];
    fs.Read(data, 0, System.Convert.ToInt32(fs.Length));
    fs.Close();
    SaveToDatabaseMethod(data);
    data = null;
}
Obviously I am reading the whole file into memory and then uploading it to the server, which I think is really bad practice, so is there any way I can at least limit the amount of memory needed for this?
What is the best practice in my case?
FileStream has a WriteByte method; see the "Insert Data" example in the documentation.
I'm not sure this is best practice: it keeps memory usage down but adds steps, so you would need to test it out. If the entire file does not load, you would also need logic to back the write out.
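As a sketch of reading and sending the file in chunks instead: SaveChunkToDatabaseMethod below is hypothetical and stands in for whatever incremental write your data layer supports (for example SqlFileStream, or a SQL UPDATE with .WRITE):

using System.IO;

public static void UploadFileToDatabaseChunked(string location)
{
    byte[] buffer = new byte[64 * 1024]; // 64KB per round trip; tune as needed
    using (FileStream fs = new FileStream(location, FileMode.Open, FileAccess.Read))
    {
        int bytesRead;
        while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            // Hypothetical method: appends one chunk to the row's FILESTREAM data.
            SaveChunkToDatabaseMethod(buffer, bytesRead);
        }
    }
}

This caps memory usage at the buffer size regardless of the file size.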