I have a large file of about 42 MB. I want to let users download it with as little memory consumption on the server as possible.
Controller Code
public ActionResult Download()
{
var filePath = "file path in server";
FileInfo file = new FileInfo(filePath);
Response.ContentType = "application/zip";
Response.AppendHeader("Content-Disposition", "attachment; filename=folder.zip");
Response.TransmitFile(file.FullName);
Response.End();
return new EmptyResult(); // needed so the ActionResult method compiles; TransmitFile has already written the response
}
Alternative method tried with a Stream:
public ActionResult Download()
{
string failure = string.Empty;
Stream stream = null;
int bytesToRead = 10000;
long LengthToRead;
try
{
var path = "file path from server";
FileWebRequest fileRequest = (FileWebRequest)FileWebRequest.Create(path);
FileWebResponse fileResponse = (FileWebResponse)fileRequest.GetResponse();
if (fileRequest.ContentLength > 0)
fileResponse.ContentLength = fileRequest.ContentLength;
//Get the Stream returned from the response
stream = fileResponse.GetResponseStream();
LengthToRead = stream.Length;
//Indicate the type of data being sent
Response.ContentType = "application/octet-stream";
//Name the file
Response.AddHeader("Content-Disposition", "attachment; filename=SolutionWizardDesktopClient.zip");
Response.AddHeader("Content-Length", fileResponse.ContentLength.ToString());
int length;
do
{
// Verify that the client is connected.
if (Response.IsClientConnected)
{
byte[] buffer = new Byte[bytesToRead];
// Read data into the buffer.
length = stream.Read(buffer, 0, bytesToRead);
// and write it out to the response's output stream
Response.OutputStream.Write(buffer, 0, length);
// Flush the data
Response.Flush();
//Clear the buffer
LengthToRead = LengthToRead - length;
}
else
{
// cancel the download if client has disconnected
LengthToRead = -1;
}
} while (LengthToRead > 0); //Repeat until no data is read
}
finally
{
if (stream != null)
{
//Close the input stream
stream.Close();
}
Response.End();
Response.Close();
}
return View("Failed");
}
Due to the size of the file, it consumes a lot of memory, which leads to a performance issue.
After checking the IIS logs, the two download approaches consume about 42 MB and 64 MB of memory respectively.
Thanks in advance
A better option would be to use FileResult instead of ActionResult:
Using this method means you don't have to load the file/bytes in memory before serving.
public FileResult Download()
{
var filePath = "file path in server";
return new FilePathResult(Server.MapPath(filePath), "application/zip");
}
Edit: For larger files FilePathResult will also fail.
Your best bet is probably Response.TransmitFile() then. I've used this on larger files (GBs) and had no issues before:
public ActionResult Download()
{
var filePath = @"file path from server";
Response.Clear();
Response.ContentType = "application/octet-stream";
Response.AppendHeader("Content-Disposition", "filename=" + filePath);
Response.TransmitFile(filePath);
Response.End();
return Index();
}
From MSDN:
Writes the specified file directly to an HTTP response output stream,
without buffering it in memory.
Try setting the Transfer-Encoding header to chunked, and return an HttpResponseMessage with a PushStreamContent. Transfer-Encoding of chunked means that the HTTP response will not have a Content-Length header, and so the client will have to parse the chunks of the HTTP response as a stream. Note, I've never run across a client (browser, etc) that didn't handle Transfer Encoding chunked. You can read more at the link below.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Transfer-Encoding
[HttpGet]
public async Task<HttpResponseMessage> Download(CancellationToken token)
{
var response = new HttpResponseMessage(System.Net.HttpStatusCode.OK)
{
Content = new PushStreamContent(async (stream, context, transportContext) =>
{
try
{
using (var fileStream = System.IO.File.OpenRead("some path to MyBigDownload.zip"))
{
await fileStream.CopyToAsync(stream);
}
}
finally
{
stream.Close();
}
}, "application/octet-stream"),
};
response.Headers.TransferEncodingChunked = true;
response.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment")
{
FileName = "MyBigDownload.zip"
};
return response;
}
I had a similar problem, but I didn't have the file on local disk; I had to download it from an API (my MVC app acted as a proxy).
The key thing is to set Response.Buffer = false; on your MVC action. I think @JanusPienaar's first solution should work with this.
My MVC action is:
public class HomeController : Controller
{
public async Task<FileStreamResult> Streaming(long RecordCount)
{
HttpClient Client;
System.IO.Stream Stream;
//This is the key thing
Response.Buffer=false;
Client = new HttpClient() { BaseAddress = new Uri("http://MyApi") };
Stream = await Client.GetStreamAsync("api/Streaming?RecordCount="+RecordCount);
return new FileStreamResult(Stream, "text/csv");
}
}
And my test WebApi (which generates the file) is:
public class StreamingController : ApiController
{
// GET: api/Streaming/5
public HttpResponseMessage Get(long RecordCount)
{
var response = Request.CreateResponse();
response.Content=new PushStreamContent((stream, http, transport) =>
{
RecordsGenerator Generator = new RecordsGenerator();
long i;
using(var writer = new System.IO.StreamWriter(stream, System.Text.Encoding.UTF8))
{
for(i=0; i<RecordCount; i++)
{
writer.Write(Generator.GetRecordString(i));
if(0==(i&0xFFFFF))
System.Diagnostics.Debug.WriteLine($"Record no: {i:N0}");
}
}
});
return response;
}
class RecordsGenerator
{
const string abc = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
char[] Chars = new char[14];//Ceiling(log26(2^63))
public string GetRecordString(long Record)
{
int iLength = 0;
long Div = Record, Mod;
do
{
iLength++;
Div=Math.DivRem(Div, abc.Length, out Mod);
//Save from backwards
Chars[Chars.Length-iLength]=abc[(int)Mod];
}
while(Div!=0);
return $"{Record} {new string(Chars, Chars.Length-iLength, iLength)}\r\n";
}
}
}
}
If RecordCount is 100000000, the file generated by the test API is 1.56 GB. Neither the Web API nor the MVC app consumes anywhere near that much memory.
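On the consuming side, if the client also needs to avoid buffering the whole response, a rough sketch (not part of the original answer; the host name, route and file name are hypothetical) is to read the body as a stream with HttpCompletionOption.ResponseHeadersRead:
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class StreamingClient
{
    static async Task Main()
    {
        using (var client = new HttpClient { BaseAddress = new Uri("http://MyMvcApp/") })
        // ResponseHeadersRead returns as soon as the headers arrive, so HttpClient
        // does not buffer the body in memory.
        using (var response = await client.GetAsync(
            "Home/Streaming?RecordCount=100000000",
            HttpCompletionOption.ResponseHeadersRead))
        using (var body = await response.Content.ReadAsStreamAsync())
        using (var file = File.Create("records.csv"))
        {
            // Copy the response to disk in chunks instead of holding 1.56 GB in RAM.
            await body.CopyToAsync(file);
        }
    }
}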
Here is the Rizwan Ansari post that worked for me:
There are situations when you need to provide a download option for a big file located somewhere on the server or generated at runtime. The function below can be used to download files of any size. Sometimes downloading a big file throws an OutOfMemoryException ("Insufficient memory to continue the execution of the program"), so this function also handles that situation by sending the file in 1 MB chunks (customizable by changing the bufferSize variable).
Usage:
DownloadLargeFile("A big file.pdf", "D:\\Big Files\\Big File.pdf", "application/pdf", System.Web.HttpContext.Current.Response);
You can replace "application/pdf" with the appropriate MIME type.
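If you'd rather not hard-code the MIME type, one option (assuming .NET 4.5+, where System.Web.MimeMapping is available) is to resolve it from the file name:
using System.Web;

// Resolve the MIME type from the file name instead of hard-coding it.
string mimeType = MimeMapping.GetMimeMapping("Big File.pdf"); // "application/pdf"
DownloadLargeFile("A big file.pdf", "D:\\Big Files\\Big File.pdf", mimeType,
    System.Web.HttpContext.Current.Response);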
Download Function:
public static void DownloadLargeFile(string DownloadFileName, string FilePath, string ContentType, HttpResponse response)
{
Stream stream = null;
// read buffer in 1 MB chunks
// change this if you want a different buffer size
int bufferSize = 1048576;
byte[] buffer = new Byte[bufferSize];
// buffer read length
int length;
// Total length of file
long lengthToRead;
try
{
// Open the file in read only mode
stream = new FileStream(FilePath, FileMode.Open, FileAccess.Read, FileShare.Read);
// Total length of file
lengthToRead = stream.Length;
response.ContentType = ContentType;
response.AddHeader("Content-Disposition", "attachment; filename=" + HttpUtility.UrlEncode(DownloadFileName, System.Text.Encoding.UTF8));
while (lengthToRead > 0)
{
// Verify that the client is connected.
if (response.IsClientConnected)
{
// Read the data in buffer
length = stream.Read(buffer, 0, bufferSize);
// Write the data to output stream.
response.OutputStream.Write(buffer, 0, length);
// Flush the data
response.Flush();
//buffer = new Byte[10000];
lengthToRead = lengthToRead - length;
}
else
{
// if user disconnects stop the loop
lengthToRead = -1;
}
}
}
catch (Exception exp)
{
// handle exception
response.ContentType = "text/html";
response.Write("Error : " + exp.Message);
}
finally
{
if (stream != null)
{
stream.Close();
}
response.End();
response.Close();
}
}
You just have to configure IIS to enable HTTP downloads; look at this link.
Then you just need to return the HTTP path of the file; it will download quickly and easily.
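A minimal sketch of that approach in an MVC controller, assuming the file is published under a hypothetical ~/Downloads virtual directory that IIS serves directly:
public ActionResult Download()
{
    // IIS streams the file itself; the action only issues a redirect,
    // so no file bytes pass through the MVC application.
    return Redirect(Url.Content("~/Downloads/folder.zip"));
}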
Related
I'm trying to download a ZIP file in ASP.NET MVC. I have done this in ASP.NET Web Forms and it works correctly, but when I do the same in MVC I don't get the same result. I tried the following:
public ActionResult Download()
{
using (ZipFile zip = new ZipFile())
{
zip.AddDirectory(Server.MapPath("~/Directories/hello"));
zip.Save(Server.MapPath("~/Directories/hello/sample.zip"));
return File(Server.MapPath("~/Directories/hello/sample.zip"),
"application/zip", "sample.zip");
}
}
But I get the binary data on screen, not a downloaded ZIP file. Why is this not working in MVC?
I have found that this does not work if I do it from a partial; if I execute the download code from the Index action and send the file from there, it works. Why?
I use this to download files. In your controller action:
var ext = Path.GetExtension(path);
string contentType = GetMimeType(ext);
using (var stream = fileManager.GetStream(path))
{
var filename = fileManager.GetFileName(path);
var response = System.Web.HttpContext.Current.Response;
TransmitStream(stream, response, path, filename, contentType);
return new EmptyResult();
}
Where GetMimeType is a method that returns known MIME types:
public static string GetMimeType(string extension, string defaultValue = "application/octet-stream")
{
if (extension == null)
{
throw new ArgumentNullException(nameof(extension));
}
if (!extension.StartsWith("."))
{
extension = "." + extension;
}
string mime;
return _mappings.TryGetValue(extension, out mime) ? mime : defaultValue;
}
With _mappings as:
private static readonly IDictionary<string, string> _mappings =
new Dictionary<string, string>(StringComparer.InvariantCultureIgnoreCase) {
{".323", "text/h323"},
{".3g2", "video/3gpp2"},
{".3gp", "video/3gpp"},
{".3gp2", "video/3gpp2"},
{".3gpp", "video/3gpp"},
{".7z", "application/x-7z-compressed"},
// Other types...
{".xwd", "image/x-xwindowdump"},
{".z", "application/x-compress"},
{".zip", "application/x-zip-compressed"},
};
And the TransmitStream:
public static void TransmitStream(
Stream stream, HttpResponse response, string fullPath, string outFileName = null, string contentType = null)
{
contentType = contentType ?? MimeMapping.GetMimeMapping(fullPath);
byte[] buffer = new byte[10000];
try
{
var dataToRead = stream.Length;
response.Clear();
response.ContentType = contentType;
if (outFileName != null)
{
response.AddHeader("Content-Disposition", "attachment; filename=" + outFileName);
}
response.AddHeader("Content-Length", stream.Length.ToString());
while (dataToRead > 0)
{
// Verify that the client is connected.
if (response.IsClientConnected)
{
// Read the data in buffer.
var length = stream.Read(buffer, 0, 10000);
// Write the data to the current output stream.
response.OutputStream.Write(buffer, 0, length);
// Flush the data to the output.
response.Flush();
buffer = new byte[10000];
dataToRead = dataToRead - length;
}
else
{
// Prevent infinite loop if user disconnects
dataToRead = -1;
}
}
}
catch (Exception ex)
{
throw new ApplicationException(ex.Message);
}
finally
{
response.Close();
}
}
Usually, if you want to download something, I suggest you use ContentResult, converting the file you want to download into a Base64 string and turning it back into a Blob on the front end with JavaScript.
Action
public ContentResult Download()
{
MemoryStream memoryStream = new MemoryStream();
file.SaveAs(memoryStream);
byte[] buffer = memoryStream.ToArray();
string fileAsString = Convert.ToBase64String(buffer);
return Content(fileAsString, "application/zip");
}
front end
var blob = new Blob([Base64ToBytes(response)], { type: "application/zip" });
var link = document.createElement("a");
document.body.appendChild(link);
link.setAttribute("type", "hidden");
link.href = URL.createObjectURL(blob);
link.download = fileName;
link.click();
I have the scenario below:
Client sends a request to Server-1 for a file download.
Server-1 sends a request to Server-2 for the file.
To make this work I need to create a mechanism where, once the client sends a request to Server-1, Server-1 requests the file from Server-2, which sends it back as a chunked response output stream. Server-1 then forwards those chunks to the client browser continuously as it receives them from Server-2.
I have written the code below; theoretically it looks fine, but it is still not working.
The entire file is not downloaded in the client browser; it seems the last chunk either is not transferred to Server-1 or is not forwarded from Server-1 to the client browser.
Server-1 code (where the client requests the file download):
private void ProccesBufferedResponse(HttpWebRequest webRequest, HttpContext context)
{
char[] responseChars = null;
byte[] buffer = null;
if (webRequest == null)
logger.Error("Request string is null for Perfios Docs Download at ProccesBufferedResponse()");
context.Response.Buffer = false;
context.Response.BufferOutput = false;
try
{
WebResponse webResponse = webRequest.GetResponse();
context.Response.ContentType = "application/pdf";
context.Response.AddHeader("Content-disposition", webResponse.Headers["Content-disposition"]);
StreamReader responseStream = new StreamReader(webResponse.GetResponseStream());
while (!responseStream.EndOfStream)
{
responseChars = new char[responseStream.ToString().ToCharArray().Length];
responseStream.Read(responseChars, 0, responseChars.Length);
buffer = Encoding.ASCII.GetBytes(responseChars);
context.Response.Clear();
context.Response.OutputStream.Write(buffer, 0, buffer.Length);
context.Response.Flush();
}
}
catch (Exception ex)
{
throw;
}
finally
{
context.Response.Flush();
context.Response.End();
}
}
Server-2 code (where Server-1 sends the request for the file):
private void DownloadInstaPerfiosDoc(int CompanyID, string fileName, string Foldertype)
{
string folderPath;
string FilePath;
int chunkSize = 1024;
int startIndex = 0;
int endIndex = 0;
int length = 0;
byte[] bytes = null;
DirectoryInfo dir;
folderPath = GetDocumentDirectory(CompanyID, Foldertype);
FilePath = folderPath + "\\" + fileName;
dir = new DirectoryInfo(folderPath);
HttpContext.Current.Response.Buffer = false;
HttpContext.Current.Response.BufferOutput = false;
if (dir.Exists && dir.GetFiles().Length > 0)
{
foreach (var file in dir.GetFiles(fileName))
{
FilePath = folderPath + "\\" + file.Name;
FileStream fsReader = new FileStream(FilePath, FileMode.Open, FileAccess.Read);
HttpContext.Current.Response.ContentType = "application/pdf";
HttpContext.Current.Response.AddHeader("Content-disposition", string.Format("attachment; filename = \"{0}\"", fileName));
int totalChunks = (int)Math.Ceiling((double)fsReader.Length / chunkSize);
for (int i = 0; i < totalChunks; i++)
{
startIndex = i * chunkSize;
if (startIndex + chunkSize > fsReader.Length)
endIndex = (int)fsReader.Length;
else
endIndex = startIndex + chunkSize;
length = (int)endIndex - startIndex;
bytes = new byte[length];
fsReader.Read(bytes, 0, bytes.Length);
HttpContext.Current.Response.Clear();
HttpContext.Current.Response.OutputStream.Write(bytes, 0, bytes.Length);
HttpContext.Current.Response.Flush();
}
}
}
}
Please help me to resolve this issue.
It is possible and feasible. I'll give a pseudo-procedure so you can understand the overall idea; a rough code sketch follows the outline below.
Server1
download action gets hit
create a request to server2
get the response stream of your server2 request
read the response stream in desired chunk sizes until it's consumed completely
write each chunk (as soon as you read) to current response stream
Server2
download action gets hit
write your stream onto your current response stream however you like
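Not your code verbatim, but a minimal sketch of the Server-1 side of that outline, assuming a plain IHttpHandler and a hypothetical Server-2 URL. The key differences from the code in the question are copying raw bytes (no StreamReader or ASCII conversion), honouring the count returned by Read, and not calling Response.Clear() inside the loop:
using System.IO;
using System.Net;
using System.Web;

public class ProxyDownloadHandler : IHttpHandler
{
    public bool IsReusable { get { return false; } }

    public void ProcessRequest(HttpContext context)
    {
        // Hypothetical Server-2 endpoint; substitute your real URL.
        var server2Url = "http://server2/api/files/sample.pdf";

        var request = (HttpWebRequest)WebRequest.Create(server2Url);
        using (WebResponse response = request.GetResponse())
        using (Stream source = response.GetResponseStream())
        {
            context.Response.Buffer = false;
            context.Response.BufferOutput = false;
            context.Response.ContentType = "application/pdf";
            context.Response.AddHeader("Content-Disposition",
                response.Headers["Content-Disposition"]);

            var buffer = new byte[64 * 1024];
            int read;
            // Copy raw bytes; never run binary data through a text encoding.
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                if (!context.Response.IsClientConnected)
                    break;
                context.Response.OutputStream.Write(buffer, 0, read);
                context.Response.Flush();
            }
        }
    }
}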
I am trying to download a file from the server to a folder the client chooses on their machine, but I keep getting the error "Could not find a part of the path".
For example, the DownloadLocation could be C:/myfolder.
Code:
FileName = comp.DownloadLocation + "/" + "/purchase" + ".csv";
regularfilename = "purchase.csv";
byte[] buffer;
using (FileStream fileStream = new FileStream(FileName, FileMode.Open))
{
int fileSize = (int)fileStream.Length;
buffer = new byte[fileSize];
fileStream.Read(buffer, 0, (int)fileSize);
}
Response.Clear();
Response.Buffer = true;
Response.BufferOutput = true;
Response.ContentType = "application/x-download";
Response.AddHeader("Content-Disposition", "attachment; filename=" + regularfilename);
Response.CacheControl = "public";
Response.OutputStream.Write(buffer, 0, buffer.Length);
Response.Flush();
Response.Clear();
Response.End();
Possibly not the answer, but this is something you should definitely check:
FileName = comp.DownloadLocation + "/" + "/purchase" + ".csv";
You probably want to remove the / before purchase, or drop the + "/" + completely, as that's going to build up a path in the format:
"somepath//purchase.csv"
That may well be causing your issue. See if it makes a difference.
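A safer way to build the path is Path.Combine, which inserts exactly one separator; a small sketch with a hypothetical value standing in for comp.DownloadLocation:
using System.IO;

string downloadLocation = @"C:\myfolder";   // stand-in for comp.DownloadLocation
string fileName = Path.Combine(downloadLocation, "purchase.csv");
// fileName == @"C:\myfolder\purchase.csv" -- no doubled separators possible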
If you need a client app to check the Web API server to see if the server's version of a file is newer than the version the client has, you can set up the server to do so by performing these steps:
Add the appropriate method to a Repository Interface, such as:
HttpResponseMessage GetCervezaBeberUpdate(string clientVersion);
Add this to the corresponding Controller class (where the Controller name is "HenryFieldingController"):
[Route("api/HenryFielding/GetUpdatedCervezaBeber")]
public HttpResponseMessage GetUpdate(string clientVersion)
{
return _tomJonesRepository.GetCervezaBeberUpdate(clientVersion);
}
Add the appropriate method to the concrete Repository class, such as:
public HttpResponseMessage GetCervezaBeberUpdate(string clientVersion)
{
    var binaryFilePath = HostingEnvironment.MapPath(@"~\App_Data\CervezaBeber.exe");
    FileVersionInfo currentVersion = FileVersionInfo.GetVersionInfo(binaryFilePath);
    HttpResponseMessage result;
    if (!ServerFileIsNewer(clientVersion, currentVersion))
    {
        result = new HttpResponseMessage(HttpStatusCode.NoContent);
    }
    else
    {
        var stream = new FileStream(binaryFilePath, FileMode.Open);
        result = new HttpResponseMessage(HttpStatusCode.OK);
        result.Content = new StreamContent(stream);
        result.Content.Headers.ContentType =
            new MediaTypeHeaderValue("application/octet-stream");
    }
    return result;
}
Also add the method in that Repository class that GetCervezaBeberUpdate() calls:
private bool ServerFileIsNewer(string clientFileVersion, FileVersionInfo serverFile)
{
Version client = new Version(clientFileVersion);
Version server = new Version(string.Format("{0}.{1}.{2}.{3}", serverFile.FileMajorPart, serverFile.FileMinorPart, serverFile.FileBuildPart, serverFile.FilePrivatePart));
return server > client;
}
Now the client can call this by passing a URI like this to the server:
http://<servername>:<portnumber>/api/
<controllername>?clientVersion=<clientversionquad>
Or, for a more literal example, in the event your server's name is "Platypus", the port to use is 4242, the Controller is named HenryFieldingController, and the version of the file the client currently has is 3.1.4.1:
http://Platypus:4242/api/HenryFielding?clientVersion=3.1.4.1
As a free-as-in-beer (you/I wish!) premium, here's some code the client can use to save the server's response to a file (passing this a URI such as shown above):
private void DownloadTheFile(string uri)
{
var outputFileName = "Platypus.exe";
var webRequest = (HttpWebRequest)WebRequest.Create(uri);
var webResponse = (HttpWebResponse)webRequest.GetResponse();
string statusCode = webResponse.StatusCode.ToString();
// From here on (including the CopyStream() method) derived from Jon Skeet's
// answer at http://stackoverflow.com/questions/411592/how-do-i-save-a-stream-to-a-file
if (statusCode == "NoContent")
{
MessageBox.Show("You already have the newest available version.");
}
else
{
var responseStream = webResponse.GetResponseStream();
using (Stream file = File.Create(outputFileName))
{
CopyStream(responseStream, file);
MessageBox.Show(string.Format("New version downloaded to {0}", outputFileName));
}
}
}
public static void CopyStream(Stream input, Stream output)
{
byte[] buffer = new byte[8 * 1024];
int len;
while ((len = input.Read(buffer, 0, buffer.Length)) > 0)
{
output.Write(buffer, 0, len);
}
}
I'm working on a quick wrapper for the SkyDrive API in C#, but I'm running into issues when downloading a file. The first part of the file comes through fine, but then differences start to appear and shortly thereafter everything becomes null (zero bytes). I'm fairly sure it's just me not reading the stream correctly.
This is the code I'm using to download the file:
public const string ApiVersion = "v5.0";
public const string BaseUrl = "https://apis.live.net/" + ApiVersion + "/";
public SkyDriveFile DownloadFile(SkyDriveFile file)
{
string uri = BaseUrl + file.ID + "/content";
byte[] contents = GetResponse(uri);
file.Contents = contents;
return file;
}
public byte[] GetResponse(string url)
{
checkToken();
Uri requestUri = new Uri(url + "?access_token=" + HttpUtility.UrlEncode(token.AccessToken));
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUri);
request.Method = WebRequestMethods.Http.Get;
WebResponse response = request.GetResponse();
Stream responseStream = response.GetResponseStream();
byte[] contents = new byte[response.ContentLength];
responseStream.Read(contents, 0, (int)response.ContentLength);
return contents;
}
This is the image file I'm trying to download
And this is the image I am getting
These two images lead me to believe that I'm not waiting for the response to finish coming through, because the content-length is the same as the size of the image I'm expecting, but I'm not sure how to make my code wait for the entire response to come through or even really if that's the approach I need to take.
Here's my test code in case it's helpful
[TestMethod]
public void CanUploadAndDownloadFile()
{
var api = GetApi();
SkyDriveFolder folder = api.CreateFolder(null, "TestFolder", "Test Folder");
SkyDriveFile file = api.UploadFile(folder, TestImageFile, "TestImage.png");
file = api.DownloadFile(file);
api.DeleteFolder(folder);
byte[] contents = new byte[new FileInfo(TestImageFile).Length];
using (FileStream fstream = new FileStream(TestImageFile, FileMode.Open))
{
fstream.Read(contents, 0, contents.Length);
}
using (FileStream fstream = new FileStream(TestImageFile + "2", FileMode.CreateNew))
{
fstream.Write(file.Contents, 0, file.Contents.Length);
}
Assert.AreEqual(contents.Length, file.Contents.Length);
bool sameData = true;
for (int i = 0; i < contents.Length && sameData; i++)
{
sameData = contents[i] == file.Contents[i];
}
Assert.IsTrue(sameData);
}
It fails at Assert.IsTrue(sameData);
This is because you don't check the return value of responseStream.Read(contents, 0, (int)response.ContentLength);. Read doesn't guarantee that it reads response.ContentLength bytes; it returns the number of bytes actually read. You can use a loop or Stream.CopyTo there.
Something like this:
WebResponse response = request.GetResponse();
MemoryStream m = new MemoryStream();
response.GetResponseStream().CopyTo(m);
byte[] contents = m.ToArray();
As LB already said, you need to continue to call Read() until you have read the entire stream.
Although Stream.CopyTo will copy the entire stream, it does not ensure that the number of bytes read matches what was expected. The following method solves this and raises an IOException if it does not read the length specified:
public static void Copy(Stream input, Stream output, long length)
{
byte[] bytes = new byte[65536];
long bytesRead = 0;
int len = 0;
while (0 != (len = input.Read(bytes, 0, Math.Min(bytes.Length, (int)Math.Min(int.MaxValue, length - bytesRead)))))
{
output.Write(bytes, 0, len);
bytesRead = bytesRead + len;
}
output.Flush();
if (bytesRead != length)
throw new IOException();
}
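For completeness, a rough sketch of how the GetResponse method from the question could use this helper (checkToken, token and the URL handling are the asker's existing members, unchanged):
public byte[] GetResponse(string url)
{
    checkToken();
    Uri requestUri = new Uri(url + "?access_token=" + HttpUtility.UrlEncode(token.AccessToken));
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUri);
    request.Method = WebRequestMethods.Http.Get;
    using (WebResponse response = request.GetResponse())
    using (Stream responseStream = response.GetResponseStream())
    using (var buffer = new MemoryStream())
    {
        // Copy the whole body and fail loudly if fewer than ContentLength bytes arrive.
        Copy(responseStream, buffer, response.ContentLength);
        return buffer.ToArray();
    }
}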
protected void downloadFunction(string filename)
{
string filepath = #"D:\XtraFiles\" + filename;
string contentType = "application/x-newton-compatible-pkg";
Stream iStream = null;
// Buffer to read 1024K bytes in chunk
byte[] buffer = new Byte[1048576];
// Length of the file:
int length;
// Total bytes to read:
long dataToRead;
try
{
// Open the file.
iStream = new FileStream(filepath, FileMode.Open, FileAccess.Read, FileShare.Read);
// Total bytes to read:
dataToRead = iStream.Length;
HttpContext.Current.Response.ContentType = contentType;
HttpContext.Current.Response.AddHeader("Content-Disposition", "attachment; filename=" + HttpUtility.UrlEncode(filename, System.Text.Encoding.UTF8));
// Read the bytes.
while (dataToRead > 0)
{
// Verify that the client is connected.
if (HttpContext.Current.Response.IsClientConnected)
{
// Read the data in buffer.
length = iStream.Read(buffer, 0, 10000);
// Write the data to the current output stream.
HttpContext.Current.Response.OutputStream.Write(buffer, 0, length);
// Flush the data to the HTML output.
HttpContext.Current.Response.Flush();
buffer = new Byte[10000];
dataToRead = dataToRead - length;
}
else
{
//prevent infinite loop if user disconnects
dataToRead = -1;
}
}
}
catch (Exception ex)
{
// Trap the error, if any.
HttpContext.Current.Response.Write("Error : " + ex.Message + "<br />");
HttpContext.Current.Response.ContentType = "text/html";
HttpContext.Current.Response.Write("Error : file not found");
}
finally
{
if (iStream != null)
{
//Close the file.
iStream.Close();
}
HttpContext.Current.Response.End();
HttpContext.Current.Response.Close();
}
}
My download function works perfectly, but while downloading, the browser can't see the total file size of the download.
So the browser says e.g. "Downloading 8 MB of ?" instead of "Downloading 8 MB of 142 MB".
What have I missed?
The Content-Length header seems to be what you are missing.
If you set this the browser will then know how much to expect. Otherwise it will just keep going til you stop sending data and it won't know how long it is until the end.
Response.AddHeader("Content-Length", iStream.Length);
You may also be interested in Response.WriteFile, which can provide an easier way to send a file to a client without having to worry about streams yourself.
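For reference, a minimal sketch of the Response.WriteFile route, reusing the filepath, filename and content type from the question and taking the Content-Length from the file size on disk:
string filepath = @"D:\XtraFiles\" + filename;
HttpContext.Current.Response.ContentType = "application/x-newton-compatible-pkg";
HttpContext.Current.Response.AddHeader("Content-Disposition",
    "attachment; filename=" + HttpUtility.UrlEncode(filename, System.Text.Encoding.UTF8));
// The browser can show total size because Content-Length is set from the file on disk.
HttpContext.Current.Response.AddHeader("Content-Length",
    new System.IO.FileInfo(filepath).Length.ToString());
HttpContext.Current.Response.WriteFile(filepath);
HttpContext.Current.Response.End();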
You need to send a Content-Length header:
HttpContext.Current.Response.AddHeader("Content-Length", iStream.Length.ToString());