In an ASP.NET application, I am using the code shown below to read video files from a shared path and play them in the browser. It works for files smaller than 300 MB, but it throws an OutOfMemoryException for a 650 MB file, and for files larger than 2 GB it throws:
The file is too long. This operation is currently limited to supporting files less than 2 gigabytes in size.
My code:
ImpersonationHelper.Impersonate(ConfigurationManager.AppSettings["Domain"], ConfigurationManager.AppSettings["UserName"], ConfigurationManager.AppSettings["Password"], delegate
{
    FileBuffer = System.IO.File.ReadAllBytes(filepath);
    if (FileBuffer != null)
    {
        Response.Buffer = true;
        Response.ContentType = contenttype;
        Response.AddHeader("Content-Disposition", "inline");
        Response.AddHeader("content-length", FileBuffer.Length.ToString());
        Response.BinaryWrite(FileBuffer);
        Response.Flush();
        HttpContext.Current.ApplicationInstance.CompleteRequest();
        Response.Close();
        Response.End();
    }
});
Loading the entire file into memory is a bad idea. Instead of using the FileBuffer array, you can do the following:
// ...
using (var stream = new FileStream(filepath, FileMode.Open, FileAccess.Read))
{
    Response.AddHeader("content-length", stream.Length.ToString());
    stream.CopyTo(Response.OutputStream);
}
You should also remove the line Response.Buffer = true; and add Response.BufferOutput = false; at the beginning. With BufferOutput == true, the web server saves the entire output in memory and only then sends it to the client. We don't want that to happen.
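Putting the two suggestions together, a minimal sketch of the handler might look like this (same ImpersonationHelper, filepath and contenttype as in your snippet; Stream.CopyTo requires .NET 4.0 or later):
ImpersonationHelper.Impersonate(ConfigurationManager.AppSettings["Domain"], ConfigurationManager.AppSettings["UserName"], ConfigurationManager.AppSettings["Password"], delegate
{
    Response.BufferOutput = false;   // send data to the client as it is written, don't hold it in memory
    Response.ContentType = contenttype;
    Response.AddHeader("Content-Disposition", "inline");

    using (var stream = new FileStream(filepath, FileMode.Open, FileAccess.Read))
    {
        Response.AddHeader("Content-Length", stream.Length.ToString());
        stream.CopyTo(Response.OutputStream);   // copies in small chunks (80 KB by default)
    }

    Response.Flush();
    HttpContext.Current.ApplicationInstance.CompleteRequest();
});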
Related
I get a request with which I create a file and return it to the client.
After the file is sent I want it deleted.
Since I get many requests, the files are big, and memory is scarce, I don't want to buffer them in memory before sending.
The only method I got to work without buffering the whole file in memory was:
Response.TransmitFile(filepath)
The problem with this is that it works asynchronously, so if I delete the file right after that call, the download is interrupted.
I tried calling Flush and adding the delete in a finally block, but neither of those worked. I thought of inheriting from HttpResponse to try to modify TransmitFile, but it's a sealed class. I tried to use HttpResponse.ClientDisconnectedToken, but either I don't understand how to use it correctly or it isn't working in this case.
How can I achieve this? Is there a better method than calling HttpResponse's TransmitFile? Keep in mind that this is an API, that the file can't be broken into different requests, and that the full file must not be loaded into memory.
I'm not sure if it could help somehow, but my controller is inheriting from AbpApiController.
You create the file in a temp folder, so just set up a job that removes all files based on their date/time. Maybe give the user 4 hours to download the file.
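A minimal sketch of such a cleanup job, assuming a hypothetical TempFolder path and a 4-hour retention window (it could be started from Application_Start, or replaced with a scheduled task or a library like Hangfire):
// Uses System, System.IO and System.Threading.
public static class TempFileCleaner
{
    // Hypothetical temp folder; replace with wherever the files are created.
    private static readonly string TempFolder = @"C:\temp\downloads";

    private static Timer _timer;

    // Call once, e.g. from Application_Start.
    public static void Start()
    {
        _timer = new Timer(_ => Cleanup(), null, TimeSpan.Zero, TimeSpan.FromMinutes(30));
    }

    private static void Cleanup()
    {
        var cutoff = DateTime.UtcNow.AddHours(-4);   // give users 4 hours to download
        foreach (var file in Directory.EnumerateFiles(TempFolder))
        {
            try
            {
                if (File.GetLastWriteTimeUtc(file) < cutoff)
                    File.Delete(file);
            }
            catch (IOException)
            {
                // File is probably still being downloaded; skip it until the next run.
            }
        }
    }
}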
An implementation of TransmitFileWithLimit. It can be improved in many ways, but it works.
Extension for HttpResponse
public static class HttpResponseExtensions
{
    public static void TransmitFileWithLimit(this HttpResponse response, string path, int limit)
    {
        var buffer = new byte[limit];
        var offset = 0L;   // long, so files over 2 GB don't overflow the counter
        response.ClearContent();
        response.Clear();
        response.ContentType = "text/plain";
        using (var fileStream = File.OpenRead(path))
        {
            var lengthStream = fileStream.Length;
            while (offset < lengthStream)
            {
                var lengthBytes = fileStream.Read(buffer, 0, limit);
                // Write the raw bytes; converting them to ASCII chars would corrupt binary files.
                response.OutputStream.Write(buffer, 0, lengthBytes);
                offset += lengthBytes;
            }
        }
        response.Flush();
        response.End();
    }
}
Inside Controller
public void Get()
{
    var path = @"C:\temporal\bigfile.mp4";
    var response = HttpContext.Current.Response;
    response.ClearContent();
    response.Clear();
    response.AddHeader("Content-Disposition", "inline; filename=" + HttpUtility.UrlPathEncode(path));
    response.ContentType = "text/plain";
    response.AddHeader("Content-Length", new FileInfo(path).Length.ToString());
    response.Flush();
    HttpContext.Current.ApplicationInstance.CompleteRequest();
    response.TransmitFileWithLimit(path, 10000);
    File.Delete(path);
}
What you can do is create a byte array from your file and then delete the file.
So, once the array is created, you hold it in memory, delete the file, and return the byte array.
Check this answer to see an example.
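For illustration, a rough sketch of that approach in a Web API controller (hypothetical path and content type; note it buffers the whole file, so it only suits files small enough to hold in memory):
// Uses System.IO, System.Net, System.Net.Http and System.Net.Http.Headers.
public HttpResponseMessage Get()
{
    var path = @"C:\temporal\bigfile.mp4";      // hypothetical file created by the request

    byte[] bytes = File.ReadAllBytes(path);     // copy the file into memory...
    File.Delete(path);                          // ...so the file on disk can be deleted right away

    var response = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new ByteArrayContent(bytes)
    };
    response.Content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
    response.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
    {
        FileName = Path.GetFileName(path)
    };
    return response;
}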
I hope this helps.
I'm trying to play wav files with the audio element. This works great when I reference the file directly, BUT when I try to stream it via a C# FileStream it stops working.
The code below works perfectly in Chrome and Opera on PC, but I am trying to get it to work on iPad and in Firefox.
I am using .NET Framework 2.0.
I have to use wav.
<audio id="audio" src="www.acme.com/streamer.aspx?music=test.wav" controls preload="auto"></audio> Not working
<audio id="audio" src="test.wav" controls preload="auto"></audio> Working
The stream function below:
string wavFileName = "test.wav";
FileStream soundStream;
long FileSize;
soundStream = new FileStream(@"\\netshare\" + wavFileName, FileMode.Open, FileAccess.Read, FileShare.Read, 256);
FileSize = soundStream.Length;
byte[] Buffer = new byte[(int)FileSize];
int count = 0;
int offset = 0;
while ((count = soundStream.Read(Buffer, offset, Buffer.Length)) > 0)
{
    Response.OutputStream.Write(Buffer, offset, count);
}
Response.Buffer = true;
Response.Clear();
Response.AddHeader("Content-Type", "audio/wave");
Response.AddHeader("Content-Disposition", "attachment;filename=" + wavFileName);
Response.ContentType = "audio/wave";
Response.BinaryWrite(Buffer);
if (soundStream != null)
{
    soundStream.Close();
}
Response.End();
(btw, this is my first post, hope i did not break any rules)
The code seems fine. I think you need to make sure that the wav file is in the right PCM format. Some browsers have issues playing certain PCM encodings. I found out that just because a browser (HTML5) supports wav files does not mean it will actually play every wav type.
You might also want to flush the Response.
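If that doesn't help, here is a sketch of the streaming code with the headers set before any bytes are written and a flush at the end (a manual copy loop, since Stream.CopyTo is not available in .NET 2.0):
string wavFileName = "test.wav";

Response.Clear();
Response.ContentType = "audio/wav";   // commonly used MIME type for wav
Response.AddHeader("Content-Disposition", "attachment; filename=" + wavFileName);

using (FileStream soundStream = new FileStream(@"\\netshare\" + wavFileName, FileMode.Open, FileAccess.Read, FileShare.Read, 256))
{
    Response.AddHeader("Content-Length", soundStream.Length.ToString());

    byte[] buffer = new byte[4096];
    int count;
    while ((count = soundStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        Response.OutputStream.Write(buffer, 0, count);
    }
}

Response.Flush();
Response.End();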
The RedirectToAction does not display the View.
// Go populate and display PDF using XML file
DoPDF(stXML);
}
UpDateDropDown(model);
return RedirectToAction("ReportsSelection", "Reports");
Rendering Code:
private void DoPDF(String stXML)
{
    string filename = string.Concat(Guid.NewGuid().ToString(), ".pdf");
    PdfReader reader = new PdfReader(new RandomAccessFileOrArray(Request.MapPath(_NFCPage._NFReference.FM_NOFEAR_PDF)), null);
    // Create the iTextSharp document
    // Set the document to write to memory
    using (MemoryStream memStream = new MemoryStream())
    {
        PdfStamper ps = new PdfStamper(reader, memStream);
        // Populate the PDF with values in the XML file
        AcroFields af = ps.AcroFields;
        ParserXML(stXML, af);
        ps.FormFlattening = false;
        ps.Writer.CloseStream = false;
        ps.Close();
        byte[] buf = new byte[memStream.Position];
        memStream.Position = 0;
        memStream.Read(buf, 0, buf.Length);
        // Set the appropriate ContentType
        Response.ContentType = "Application/pdf";
        // Get the physical path to the file
        Response.AddHeader("Content-disposition", string.Format("attachment; filename={0};", filename));
        // Write the file directly to the HTTP content output stream.
        Response.Buffer = true;
        Response.Clear();
        Response.BinaryWrite(memStream.GetBuffer()); //Comment out to work
        Response.End(); //Comment out to work
    }
}
I have noticed that if I remove the last two lines in the DoPDF routine, it does display the view.
Response.End() will cause the server to send the HTTP response. Your browser at that point will consider the request completed and the redirect won't happen. Can you provide more context on what you're trying to accomplish? Then we can get a better idea of how to help you.
Edit: never mind, you have a Response.End() call there; that will end execution of the request, and your redirect will obviously not work. If you're trying to flush the stream, then you need to call Response.Flush() instead.
Don't handle file downloads manually in MVC; as you can see, it can cause problems...
return File(memStream, "Application/pdf", filename);
Will do everything for you.
MSDN
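For example, the DoPDF code above could become an action along these lines (a sketch reusing the names from the question; the action name is hypothetical, and the MemoryStream is not wrapped in a using because FileStreamResult disposes it after writing):
public ActionResult NoFearPdf(string stXML)
{
    string filename = string.Concat(Guid.NewGuid().ToString(), ".pdf");

    PdfReader reader = new PdfReader(new RandomAccessFileOrArray(Request.MapPath(_NFCPage._NFReference.FM_NOFEAR_PDF)), null);

    var memStream = new MemoryStream();
    PdfStamper ps = new PdfStamper(reader, memStream);
    ParserXML(stXML, ps.AcroFields);     // populate the PDF with values from the XML file
    ps.FormFlattening = false;
    ps.Writer.CloseStream = false;
    ps.Close();

    memStream.Position = 0;              // rewind so FileStreamResult streams from the start
    return File(memStream, "application/pdf", filename);
}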
A user posted this article about how to use HttpResponse.Filter to compress large amounts of data. But what will happen if I try to transfer a 4 GB file? Will it load the whole file into memory in order to compress it, or will it compress it chunk by chunk?
I mean, I'm doing this right now:
public void GetFile(HttpResponse response)
{
    String fileName = "example.iso";
    response.ClearHeaders();
    response.ClearContent();
    response.ContentType = "application/octet-stream";
    response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName);
    response.AppendHeader("Content-Encoding", "deflate");   // the body is compressed, so tell the client
    // No Content-Length header: the compressed size is not known up front.
    using (FileStream fs = new FileStream(Path.Combine(HttpContext.Current.Server.MapPath("~/App_Data"), fileName), FileMode.Open))
    // DeflateStream in Compress mode is write-only, so it wraps the output side, not the file.
    using (DeflateStream ds = new DeflateStream(response.OutputStream, CompressionMode.Compress))
    {
        Byte[] buffer = new Byte[4096];
        Int32 bytesRead = 0;
        while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            ds.Write(buffer, 0, bytesRead);
            response.Flush();
        }
    }
}
So at the same time I'm reading, I'm compressing and sending it. I want to know whether HttpResponse.Filter does the same thing, or whether it loads the whole file into memory in order to compress it.
Also, I'm a little unsure about this... maybe the whole file needs to be loaded into memory to compress it... does it?
Cheers.
HttpResponse.Filter is a stream: you can write to it in chunks.
You are doing it correctly. You are using a FileStream and a DeflateStream to read the file and compress it. You are reading 4096 bytes at a time and then writing them into the response stream, so all you are using is 4096 bytes (and a little bit more) of memory.
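For comparison, a sketch of the Response.Filter approach from that article: you assign a stream that wraps the existing filter, and ASP.NET then pushes every chunk you write through it, so it also compresses incrementally rather than buffering the whole file (assuming the client sent Accept-Encoding: deflate; the method name here is hypothetical):
public void GetFileWithFilter(HttpResponse response)
{
    String fileName = "example.iso";
    response.ClearHeaders();
    response.ClearContent();
    response.ContentType = "application/octet-stream";
    response.AppendHeader("Content-Disposition", "attachment; filename=" + fileName);

    // Everything written to the response from now on passes through the DeflateStream.
    response.Filter = new DeflateStream(response.Filter, CompressionMode.Compress);
    response.AppendHeader("Content-Encoding", "deflate");

    using (FileStream fs = new FileStream(Path.Combine(HttpContext.Current.Server.MapPath("~/App_Data"), fileName), FileMode.Open))
    {
        Byte[] buffer = new Byte[4096];
        Int32 bytesRead = 0;
        while ((bytesRead = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            response.OutputStream.Write(buffer, 0, bytesRead);   // compressed on the way out, 4 KB at a time
            response.Flush();
        }
    }
}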
I'm trying to create a method which continually streams a zip file as the user downloads it (so that there is no wasted streaming).
I added a Thread.Sleep to simulate latency.
public override void ExecuteResult(ControllerContext context)
{
    HttpResponseBase response = context.HttpContext.Response;
    response.Clear();
    response.ClearContent();
    response.ClearHeaders();
    response.Cookies.Clear();
    response.ContentType = ContentType;
    response.ContentEncoding = Encoding.Default;
    response.AddHeader("Content-Type", ContentType);
    context.HttpContext.Response.AddHeader("Content-Disposition",
        String.Format("attachment; filename={0}", this.DownloadName));
    int ind = 0;
    using (ZipOutputStream zipOStream = new ZipOutputStream(context.HttpContext.Response.OutputStream))
    {
        foreach (var file in FilesToZip)
        {
            ZipEntry entry = new ZipEntry(FilesToZipNames[ind++]);
            zipOStream.PutNextEntry(entry);
            Thread.Sleep(1000);
            zipOStream.Write(file, 0, file.Length);
            zipOStream.Flush();
        }
        zipOStream.Finish();
    }
    response.OutputStream.Flush();
}
It seems like the zip will not start streaming until all the files are zipped. Is there a way to stream continuously? Maybe with a different library?
Assuming the zip format is amenable to streaming, your problem is that your response is being buffered by default. If you set HttpResponseBase.BufferOutput to false, it should start streaming immediately.
I think you might need to flush the OutputStream rather than the zipOStream to see any output over HTTP, so response.OutputStream.Flush() would appear inside your loop in that case. Not sure if it will actually solve your problem though.
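Putting both suggestions together, a sketch of the relevant part of ExecuteResult (same SharpZipLib ZipOutputStream and fields as in the question) might look like:
response.BufferOutput = false;   // stop ASP.NET from buffering the whole response in memory

using (ZipOutputStream zipOStream = new ZipOutputStream(response.OutputStream))
{
    int ind = 0;
    foreach (var file in FilesToZip)
    {
        zipOStream.PutNextEntry(new ZipEntry(FilesToZipNames[ind++]));
        zipOStream.Write(file, 0, file.Length);
        zipOStream.Flush();              // push the finished entry into the response stream...
        response.OutputStream.Flush();   // ...and on to the client
    }
    zipOStream.Finish();
}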