I have a 5 MB PDF on the server. Downloading this file using WriteFile gives me a 15 MB download, whereas TransmitFile gives the correct 5 MB file size...
Is this due to some sort of decompression into memory on the server for WriteFile? Just wondering if anyone has seen the same thing happening...
(PS: I only noticed this since we moved to IIS 7.)
The code being:
if (File.Exists(filepath))
{
    HttpContext.Current.Response.Clear();
    HttpContext.Current.Response.ContentType = "application/octet-stream";
    HttpContext.Current.Response.AddHeader("content-disposition", "attachment;filename=\"" + Path.GetFileName(filepath) + "\"");
    HttpContext.Current.Response.AddHeader("content-length", new FileInfo(filepath).Length.ToString());
    //HttpContext.Current.Response.WriteFile(filepath);
    HttpContext.Current.Response.TransmitFile(filepath);
    HttpContext.Current.Response.Flush();
    HttpContext.Current.Response.Close();
}
TransmitFile - Writes the specified file directly to an HTTP response output stream without buffering it in memory.
WriteFile - Writes the specified file directly to an HTTP response output stream.
I would say the difference occurs because TransmitFile doesn't buffer the file. WriteFile uses buffering (AFAIK), temporarily holding the data before transmitting it, so it cannot report an accurate file size because it writes the data in chunks.
You can understand it from the following definitions.
Response.TransmitFile vs Response.WriteFile:
TransmitFile: This method sends the file to the client without loading it into the application's memory on the server. It is the ideal choice when the file being downloaded is large.
WriteFile: This method loads the file being downloaded into the server's memory before sending it to the client. If the file is large, the ASP.NET worker process might get restarted.
Reference: TransmitFile VS WriteFile
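If TransmitFile were not an option for some reason, another way to avoid buffering the whole file is to turn off response buffering and copy the file to the output stream in small chunks yourself. This is only a minimal sketch of that pattern (it reuses the filepath variable from the code above), not something from the original post:

    HttpResponse response = HttpContext.Current.Response;
    response.Clear();
    response.BufferOutput = false; // do not buffer the whole response in memory
    response.ContentType = "application/octet-stream";
    response.AddHeader("content-disposition", "attachment;filename=\"" + Path.GetFileName(filepath) + "\"");
    response.AddHeader("content-length", new FileInfo(filepath).Length.ToString());
    using (FileStream source = File.OpenRead(filepath))
    {
        byte[] buffer = new byte[64 * 1024]; // 64 KB chunks
        int read;
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            response.OutputStream.Write(buffer, 0, read);
        }
    }
    response.Flush();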
Is it possible to upload a file asynchronously to a website using ASP MVC 4?
In my current solution, when the user uploads a file, the post-back checks the number of bytes in the file stream; if it's greater than 8024 bytes, we put an error in TempData and return to the same page, indicating that the file is too large. If it is under 8024 bytes, I create a new Task that funnels bytes from the FileStream into a new destination file somewhere on the server.
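Roughly, the current check looks like this (a simplified sketch only; the action and view names are hypothetical, and the copy is shown synchronously rather than inside the Task):

    [HttpPost]
    public ActionResult Upload(HttpPostedFileBase file)
    {
        // Hypothetical 8024-byte limit from the description above.
        if (file == null || file.ContentLength > 8024)
        {
            TempData["Error"] = "The file is too large.";
            return RedirectToAction("Index");
        }

        // The real code starts a Task that funnels the bytes to a destination file.
        string destination = Server.MapPath("~/App_Data/" + Path.GetFileName(file.FileName));
        file.SaveAs(destination);
        return RedirectToAction("Index");
    }

Note that by the time this action runs, the request body has already been received in full, which is exactly the problem described next.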
However, my boss correctly pointed out that at that point it may be too late to warn the user that their file is too large.
We were wondering whether it is possible to check the length of the file on the server side before reading any bytes from the user, so the user does not have to wait for the file to fully upload before finding out it is too large (imagine uploading an 8125 KB file on a 3 Mbit connection only to find out that the file you gave was 1 byte too large; sorry, try again).
The reason I asked if we could upload a file asynchronously is because this seems very much like asynchronous streaming to me - immediately give a response, regardless of the actual progress of the upload.
We're aware you can use IIS (through the httpHandler attribute in the configuration) to prevent requests over X bytes, but we want to do the error handling ourselves.
I have created a simple WCF service to prototype file uploading. The service:
[ServiceContract]
public class Service1
{
    [OperationContract]
    [WebInvoke(Method = "POST", UriTemplate = "/Upload")]
    public void Upload(Stream stream)
    {
        using (FileStream targetStream = new FileStream(@"C:\Test\output.txt", FileMode.Create, FileAccess.Write))
        {
            stream.CopyTo(targetStream);
        }
    }
}
It uses webHttpBinding with transferMode set to "Streamed" and maxReceivedMessageSize, maxBufferPoolSize and maxBufferSize all set to 2GB. httpRuntime has maxRequestLength set to 10MB.
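For reference, the configuration described above would look roughly like this (a sketch only; the binding and behavior names are placeholders):

    <system.serviceModel>
      <bindings>
        <webHttpBinding>
          <binding name="streamedBinding" transferMode="Streamed"
                   maxReceivedMessageSize="2147483647"
                   maxBufferPoolSize="2147483647"
                   maxBufferSize="2147483647" />
        </webHttpBinding>
      </bindings>
      <services>
        <service name="Service1">
          <endpoint address="" binding="webHttpBinding"
                    bindingConfiguration="streamedBinding"
                    behaviorConfiguration="web"
                    contract="Service1" />
        </service>
      </services>
      <behaviors>
        <endpointBehaviors>
          <behavior name="web">
            <webHttp />
          </behavior>
        </endpointBehaviors>
      </behaviors>
    </system.serviceModel>
    <system.web>
      <httpRuntime maxRequestLength="10240" />
    </system.web>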
The client issues HTTP requests in the following way:
HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(@"http://.../Service1.svc/Upload");
request.Method = "POST";
request.SendChunked = true;
request.AllowWriteStreamBuffering = false;
request.ContentType = MediaTypeNames.Application.Octet;
using (FileStream inputStream = new FileStream(@"C:\input.txt", FileMode.Open, FileAccess.Read))
{
    using (Stream outputStream = request.GetRequestStream())
    {
        inputStream.CopyTo(outputStream);
    }
}
Now, finally, what's wrong:
When uploading a 100 MB file, the server returns HTTP 400 (Bad Request). I've tried enabling WCF tracing, but it shows no error. When I increase httpRuntime.maxRequestLength to 1 GB, the file is uploaded without problems. MSDN says that maxRequestLength "specifies the limit for the input stream buffering threshold, in KB".
This leads me to believe that the whole file (all 100 MB of it) is first stored in the "input stream buffer" and only then is it available to my Upload method on the server. I can actually see that the size of the file on the server does not grow gradually (as I would expect); instead, at the moment it is created it is already 100 MB.
The question: how can I get this to work so that the "input stream buffer" is reasonably small (say, 1 MB) and my Upload method gets called when it overflows? In other words, I want the upload to be truly streamed, without the whole file being buffered anywhere.
EDIT:
I have now discovered that httpRuntime contains another setting that is relevant here: requestLengthDiskThreshold. It seems that when the input buffer grows beyond this threshold, it is no longer stored in memory but on the filesystem instead. So at least the whole 100 MB file is not kept in memory (which is what I was most afraid of); however, I would still like to know whether there is some way to avoid this buffer altogether.
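For anyone else hitting this: the setting sits next to maxRequestLength in web.config and both values are in KB (the numbers below are only illustrative):

    <system.web>
      <!-- Illustrative values: keep at most 256 KB of the request in memory,
           spill the rest to disk, and allow requests up to 1 GB. -->
      <httpRuntime maxRequestLength="1048576" requestLengthDiskThreshold="256" />
    </system.web>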
If you are using .NET 4 and hosting your service in IIS 7+, you may be affected by an ASP.NET bug described in the following blog post:
http://blogs.microsoft.co.il/blogs/idof/archive/2012/01/17/what-s-new-in-wcf-4-5-improved-streaming-in-iis-hosting.aspx
Basically, for streamed requests, the ASP.NET handler in IIS buffers the whole request before handing control over to WCF, and this handler obeys the maxRequestLength limit.
As far as I know, there is no workaround for the bug, and you have the following options:
upgrade to .NET 4.5
self-host your service instead of using IIS (see the sketch after this list)
use a binding that is not based on HTTP, so that the ASP.NET handler is not involved
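For the self-hosting option, a minimal sketch might look like the following console host (it assumes the Service1 class from the question; the address is a placeholder):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Web;

    class Program
    {
        static void Main()
        {
            // Streamed webHttpBinding, mirroring the configuration from the question.
            var binding = new WebHttpBinding(WebHttpSecurityMode.None)
            {
                TransferMode = TransferMode.Streamed,
                MaxReceivedMessageSize = 2147483647
            };

            // WebServiceHost adds the webHttp behavior to the endpoint automatically.
            using (var host = new WebServiceHost(typeof(Service1), new Uri("http://localhost:8080/")))
            {
                host.AddServiceEndpoint(typeof(Service1), binding, "");
                host.Open();
                Console.WriteLine("Service listening; press Enter to stop.");
                Console.ReadLine();
            }
        }
    }

Because this bypasses IIS, the ASP.NET buffering handler (and its maxRequestLength limit) is not involved.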
This may be a bug in the streaming implementation. I found an MSDN forum thread that suggests doing exactly what you are describing: http://social.msdn.microsoft.com/Forums/en-US/wcf/thread/fb9efac5-8b57-417e-9f71-35d48d421eb4/. Unfortunately the Microsoft employee suggesting the fix found a bug in the implementation and didn't follow up with details on a fix.
That said, it looks like the implementation is broken, which you could test by profiling your code with a memory profiler and verifying whether or not the entire file is being stored in memory. If the entire file is being stored in memory, you won't be able to fix this issue unless somebody finds a configuration problem with your code.
Also, while using requestLengthDiskThreshold could technically work, it will dramatically increase your write times: each file has to be written first as temp data, read back from temp data, written again as the final file, and finally have the temp data deleted. As you have already said, you are dealing with extremely large files, so I doubt such a solution is acceptable.
Your best bet is to use a chunking framework and manually reconstruct the file. I found instructions on how to write such logic at http://aspilham.blogspot.com/2011/03/file-uploading-in-chunks-using.html but have not had the time to check them for accuracy.
I'm sorry I can't tell you why your code isn't working as documented, but something similar to the second example should work without ballooning your memory footprint.
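To give an idea of what manual chunking can look like on the client side, here is a rough sketch under my own assumptions, not the code from the linked article; the /UploadChunk endpoint and its fileName parameter are hypothetical, and the server would need to append each posted chunk to the target file:

    static void UploadInChunks(string path, string serviceUrl)
    {
        const int chunkSize = 1024 * 1024; // 1 MB per request
        byte[] buffer = new byte[chunkSize];
        string fileName = Path.GetFileName(path);

        using (FileStream input = File.OpenRead(path))
        {
            int read;
            while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
            {
                // Hypothetical endpoint that appends the posted bytes to fileName on the server.
                HttpWebRequest request = (HttpWebRequest)WebRequest.Create(
                    serviceUrl + "/UploadChunk?fileName=" + Uri.EscapeDataString(fileName));
                request.Method = "POST";
                request.ContentType = "application/octet-stream";
                request.ContentLength = read;

                using (Stream body = request.GetRequestStream())
                {
                    body.Write(buffer, 0, read);
                }
                using (request.GetResponse()) { } // wait for each chunk to be accepted before sending the next
            }
        }
    }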
I need to download a zip file created in realtime from a webservice.
Let me explain.
I am developing a web application that uses a SOAP/XML webservice. The webservice has an Export function that returns a temporary URL for downloading the file. Upon a download request, the server creates the file and makes it available for download after a few seconds.
I'm trying to use
webClient.DownloadFile(url, @"c:/etc../");
This downloads the file but saves it as 0 KB; it is too fast! The server does not have time to create the file. I also tried to put
webClient.OpenRead(url);
System.Threading.Thread.Sleep(7000);
webClient.DownloadFile(url, @"c:/etc../");
but it does not work.
In debug mode, if I put a breakpoint on webClient.DownloadFile and resume after 3 or 4 seconds, the server has time to create the file and I get a full download.
The developers of the webservice suggested that I use "polling" on the URL until the file is ready for download. How does that work?
How can I solve my problem? (I also tried DownloadFile in asynchronous mode.)
I have a similar mechanism in my application, and it works perfectly. WebClient makes the request and waits, because the server is creating the response (the file). If WebClient downloads 0 KB, that means the server responded to the request with an empty response. This may not be a bug, but a design decision. If creating the file takes a long time, this approach could result in timeouts; on the other hand, if creating the file takes a short time, the server side should respond with the file (making WebClient wait on the request until the file is ready). I would discuss this with the other developers and maybe redesign the "file generator".
EDIT: Polling means making requests in a loop, for example every 2 seconds. I'm using DownloadData because it is useless, and resource consuming, to save an empty file every time, which is what DownloadFile does.
public void PollAndDownloadFile(Uri uri, string filePath)
{
    using (WebClient webClient = new WebClient())
    {
        // Keep requesting until the server returns a non-empty response.
        byte[] downloadedBytes = webClient.DownloadData(uri);
        while (downloadedBytes.Length == 0)
        {
            Thread.Sleep(2000);
            downloadedBytes = webClient.DownloadData(uri);
        }

        // Write the downloaded bytes to the target file.
        using (Stream file = File.Open(filePath, FileMode.Create))
        {
            file.Write(downloadedBytes, 0, downloadedBytes.Length);
        }
    }
}
I want to send files larger than 800 MB, as file streams, from a controller to the UI.
Is there any method to send the file stream from the controller to the browser in chunks?
Because if I use
File(downloadStream, "application/octet-stream", fileName);
it consumes system memory and fails to send the file to the UI.
Please suggest the most efficient way of sending the file stream in chunks.
Use FilePathResult, which uses HttpResponse.TransmitFile to write the file directly to the HTTP response. This method doesn't buffer the file in memory on the server, so it should be a better option for sending larger files.
Check out its implementation here
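For example (a minimal sketch; the folder and action name are placeholders), returning the file by path from a controller action gives you a FilePathResult, which writes the file via TransmitFile rather than loading it into memory:

    public class DownloadController : Controller
    {
        public ActionResult GetFile(string fileName)
        {
            // Hypothetical location of the large files; Path.GetFileName guards against path traversal.
            string path = Server.MapPath("~/App_Data/" + Path.GetFileName(fileName));

            // Controller.File(path, ...) returns a FilePathResult.
            return File(path, "application/octet-stream", fileName);
        }
    }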
My client is uploading files larger than 1 GB through the application. I know I can only upload 100 MB using an ASP.NET MVC application.
public static byte[] ReadStream(Stream st)
{
    st.Position = 0;
    byte[] data = new byte[st.Length];
    ...
}
I am getting the error at byte[] data = new byte[st.Length]; because st.Length = 1330768612.
Error - "Exception of type 'System.OutOfMemoryException' was thrown."
Is there any way I can upload a file larger than 1 GB?
Why can we define maxRequestLength = 0 - 2097151 in web.config?
IMO you need to use the right tool for the job. HTTP was simply not intended to transfer large files like this. Why don't you use FTP instead? You could then build a web interface around it.
The error shown to you suggests the server does not have enough memory to process the file in memory. Check whether your server has enough memory to allocate such a big array/file.
You could also try to process the stream in chunks. The fact that you get an out-of-memory error suggests that the file is sent to the server, but the server cannot process it.
I really think it has to do with the size of the array you allocate: it just won't fit in the memory of your machine (or in the memory assigned to .NET).
The error says that you ran out of memory while trying to allocate a 1 GB byte array. This is not related to MVC. You should also note that the memory limit for 32-bit processes is 2 GB. If your server runs a 32-bit OS and you allocate 1 GB of that for a single upload, you will quickly deplete the available memory.
Instead of trying to read the entire stream into memory, use Stream.Read to read the data in chunks with a reasonably sized buffer, and store the chunks to a file stream with a Write call. Not only will you avoid OutOfMemoryExceptions, your code will also run much faster, because you won't have to wait for the entire 1 GB to load before storing it to a file.
The code can be as simple as this:
public static void SaveStream(Stream st, string targetFile)
{
    byte[] inBuffer = new byte[10000];
    using (FileStream outStream = File.Create(targetFile, 20000))
    {
        // Copy the stream in buffer-sized chunks, writing only as many bytes as were read.
        int bytesRead;
        while ((bytesRead = st.Read(inBuffer, 0, inBuffer.Length)) > 0)
        {
            outStream.Write(inBuffer, 0, bytesRead);
        }
    }
}
You can tweak the buffer sizes to balance throughput (how quickly you upload and save) vs scalability (how many clients you can handle).
I remember on an older project we had to work out a way to allow the user to upload a 2-4 GB file to our ASP.NET web application.
If I recall correctly, we used the FileUpload control and edited web.config to allow for a greater file size:
<system.web>
    <httpRuntime executionTimeout="2400" maxRequestLength="40960" />
</system.web>
Another option would be to use an HttpModule:
http://dotnetslackers.com/Community/blogs/haissam/archive/2008/09/12/upload-large-files-in-asp-net-using-httpmodule.aspx
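This is not the code from that article, but a rough sketch of the general idea: an HttpModule can grab the HttpWorkerRequest and read the request body in chunks, writing it straight to disk so the upload never has to fit in memory (the URL check and target path below are placeholders):

    public class LargeUploadModule : IHttpModule
    {
        public void Init(HttpApplication application)
        {
            application.BeginRequest += OnBeginRequest;
        }

        private void OnBeginRequest(object sender, EventArgs e)
        {
            HttpApplication app = (HttpApplication)sender;

            // Only intercept the hypothetical upload URL.
            if (!app.Request.Path.EndsWith("/LargeUpload", StringComparison.OrdinalIgnoreCase))
                return;

            HttpWorkerRequest worker =
                (HttpWorkerRequest)((IServiceProvider)app.Context).GetService(typeof(HttpWorkerRequest));

            string targetFile = app.Server.MapPath("~/App_Data/upload.bin"); // placeholder target

            using (FileStream output = File.Create(targetFile))
            {
                // Bytes IIS has already read for us.
                byte[] preloaded = worker.GetPreloadedEntityBody();
                if (preloaded != null)
                    output.Write(preloaded, 0, preloaded.Length);

                // Read the rest of the body in chunks instead of letting ASP.NET buffer it.
                if (!worker.IsEntireEntityBodyIsPreloaded())
                {
                    byte[] buffer = new byte[64 * 1024];
                    int read;
                    while ((read = worker.ReadEntityBody(buffer, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, read);
                    }
                }
            }

            app.CompleteRequest();
        }

        public void Dispose() { }
    }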
I would suggest using FTP or writing a small desktop application.
HTTP was never intended to send such large files.
Here's a Microsoft Knowledge base answer for you