Uploading a file through ASP MVC asynchronously? - c#

Is it possible to upload a file asynchronously to a website using ASP MVC 4?
In my current solution, when the user uploads a file, the post-back checks the number of bytes in the file stream - if it's greater than 8024 bytes, we put an error in TempData and return to the same page indicating that the file is too large. If it is under 8024, I create a new Task that funnels bytes from the FileStream into a new destination file somewhere on the server.
However, my boss correctly pointed out that at that point it may be too late to warn the user that their file is too large.
We were wondering whether it is possible to check the length of the file on the server side before reading any bytes from the user (and thus without the user having to wait for the file to fully upload before finding out that it is too large - imagine trying to upload an 8125kb file on a 3MBit connection only to find out that the file you gave was 1 byte too large, sorry, try again).
The reason I asked if we could upload a file asynchronously is because this seems very much like asynchronous streaming to me - immediately give a response, regardless of the actual progress of the upload.
We're aware you can use IIS (through the httpHandler attribute in the configuration) to prevent requests over X bytes, but we want to handle the error ourselves.
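One way to reject an oversized request before reading its body is to look only at the Content-Length header. Below is a minimal sketch, assuming an MVC action filter; the attribute name, byte limit, and redirect behaviour are illustrative and not from the original question, and a client can always lie about the header, so the real check still has to happen while streaming.

using System.Web.Mvc;

public class MaxContentLengthAttribute : ActionFilterAttribute
{
    private readonly int _maxBytes;

    public MaxContentLengthAttribute(int maxBytes)
    {
        _maxBytes = maxBytes;
    }

    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var request = filterContext.HttpContext.Request;

        // ContentLength comes from the request headers, so no body bytes
        // have to be consumed to evaluate it.
        if (request.ContentLength > _maxBytes)
        {
            filterContext.Controller.TempData["Error"] = "The file is too large.";
            filterContext.Result = new RedirectResult(
                request.UrlReferrer != null ? request.UrlReferrer.ToString() : "~/");
        }

        base.OnActionExecuting(filterContext);
    }
}

// Usage (hypothetical limit matching the question's 8024-byte example):
// [MaxContentLength(8024)]
// public ActionResult Upload(HttpPostedFileBase file) { ... }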

Related

efficiently pass files from webserver to file server

I have multiple web servers and one central file server inside my data center,
and all my web servers store user-uploaded files on the central internal file server.
I would like to know the best way to pass a file from a web server to the file server in this case.
As suggested, I'll try to add more details to the question:
The solution I came up with was:
after receiving a file from the user at the web server, just do an HTTP POST to the file server. But I think there is something wrong with this, because it causes large files to be loaded entirely into memory twice (once at the web server and once at the file server).
Is your file server just another Windows/Linux server, or is it a NAS device? I can suggest a number of approaches based on your requirements. The question is why you want to use HTTP when you have much better ways to transfer files between servers.
HTTP is best when you send text data, as HTTP itself is text-based. From the client side to the server side, HTTP is used because that is the only option the browser gives you. But between your servers, I feel you should use the SMB protocol (I'm assuming you are using Windows, as the question is tagged for IIS) to move data. It will be orders of magnitude faster, as it is much more efficient to transfer the same data over SMB than over HTTP.
And for the SMB protocol, you do not have to write any code or complex scripts. As provided in one of the answers above, you can just issue a simple copy command and it will happen for you.
So, just summarizing the options for you (based on my preference):
1. Let the files get uploaded to some location on each IIS web server, e.g. C:\temp\UploadedFiles. You can write a simple 2-3 line PowerShell script which will copy the files from this C:\temp\UploadedFiles to \\FileServer\Files\UserID\<FILEGUID>\uploaded.file. The same PowerShell script can delete the file once it has been moved to the other server successfully.
E.g. the script can be this simple, and easy to set up as a Windows scheduled task:
$Source = "C:\temp\UploadedFiles"
$Destination = "\\FileServer\Files\UserID\<FILEGUID>\"
New-Item -ItemType directory -Path $Destination -Force
Copy-Item -Path $Source\*.* -Destination $Destination -Force
This script can be modified to suit your needs, e.g. to delete the files once the copy is done :)
2. In the ASP.NET application, you can save the file directly to the network location, i.e. in the SaveAs call you can give the network path itself. You have to make sure this network share is accessible to the IIS worker process and that it has write permission. Also, to my understanding, ASP.NET saves the file to a temporary location first (you do not have control over this if you are using ASP.NET's HttpPostedFileBase or FormCollection). More details here
You can even run this asynchronously so that your requests will not be blocked (a sketch follows at the end of this answer).
if (FileUpload1.HasFile)
{
    // Call to save the file to the network share.
    FileUpload1.SaveAs(@"\\networkshare\filename");
}
https://msdn.microsoft.com/en-us/library/system.web.ui.webcontrols.fileupload.saveas(v=vs.110).aspx
3. Save the file the current way to a local directory and then use HTTP POST. This is the worst design possible, as you first read the contents and then transfer them, chunked, to the other server, where you have to set up another web service which receives the file. Then you have to read the file from the request stream and save it to your location again. I'm not sure you need to do this.
Let me know if you need more details on any of the listed methods.
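For the asynchronous variant of option 2, here is a minimal sketch assuming an MVC controller receiving an HttpPostedFileBase; the share path, controller, and action names are placeholders.

using System.IO;
using System.Threading.Tasks;
using System.Web;
using System.Web.Mvc;

public class UploadController : Controller
{
    [HttpPost]
    public async Task<ActionResult> Upload(HttpPostedFileBase file)
    {
        // UNC path to the shared folder on the file server (placeholder).
        var targetPath = Path.Combine(@"\\FileServer\Files", Path.GetFileName(file.FileName));

        // Copy the uploaded stream to the network share without blocking
        // the request thread while the bytes are written.
        using (var target = System.IO.File.Create(targetPath))
        {
            await file.InputStream.CopyToAsync(target);
        }

        return RedirectToAction("Index");
    }
}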
Or you just write it to a folder on the webservers, and create a scheduled task that moves the files to the file server every x minutes (e.g. via robocopy). This also makes sure your webservers are not reliant on your file server.
Assuming that you have an HttpPostedFileBase then the best way is just to call the .SaveAs() method.
You need the UNC path to the file server and that is it. The simplest version would look something like this:
public void SaveFile(HttpPostedFileBase inputFile) {
    var saveDirectory = @"\\fileshare\application\directory";
    var savePath = Path.Combine(saveDirectory, inputFile.FileName);
    inputFile.SaveAs(savePath);
}
However, this is simplistic in the extreme. Take a look at the OWASP Guidance on Unrestricted File Uploads. File uploads can be the source of many vulnerabilities in your application.
You also need to make sure that the web application has access to the file share. Take a look at this answer
Creating a file on network location in asp.net
for more info. Generally the best solution is to run the application pool with a special identity which is only used to access the folder.
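As a rough illustration of the kind of checks the OWASP guidance calls for, here is a hedged sketch; the extension whitelist and size limit are example values only, not a complete defence.

using System;
using System.IO;
using System.Linq;
using System.Web;

public static class UploadValidator
{
    // Example whitelist and limit; adjust to your own policy.
    private static readonly string[] AllowedExtensions = { ".pdf", ".png", ".jpg" };
    private const int MaxBytes = 5 * 1024 * 1024;

    public static bool IsValid(HttpPostedFileBase inputFile, out string error)
    {
        error = null;

        if (inputFile == null || inputFile.ContentLength == 0)
        {
            error = "No file was uploaded.";
            return false;
        }

        if (inputFile.ContentLength > MaxBytes)
        {
            error = "The file is too large.";
            return false;
        }

        // Never trust the client-supplied path; keep only the file name.
        var extension = Path.GetExtension(Path.GetFileName(inputFile.FileName));
        if (!AllowedExtensions.Contains(extension, StringComparer.OrdinalIgnoreCase))
        {
            error = "This file type is not allowed.";
            return false;
        }

        return true;
    }
}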
The solution I came up with was: after receiving a file from the user at the web server, just do an HTTP POST to the file server. But I think there is something wrong with this, because it causes large files to be loaded entirely into memory twice (once at the web server and once at the file server).
I would suggest not posting the file all at once - it would then be held fully in memory, which is not needed.
You could post the file in chunks using ajax. When a chunk arrives at your server, just append it to the file.
With the File Reader API, you can read the file in chunks in JavaScript.
Something like this:
/** upload file in chunks */
function upload(file) {
    var chunkSize = 8000;
    var start = 0;
    while (start < file.size) {
        var end = start + chunkSize;
        var chunk = file.slice(start, end);
        var xhr = new XMLHttpRequest();
        xhr.onload = function () {
            // Check whether all chunks have arrived, and send the filename
            // either here or in the first/last request.
        };
        xhr.open("POST", "/FileUpload", true);
        xhr.send(chunk);
        start = end;
    }
}
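On the receiving side, a minimal sketch of an MVC action that appends each chunk to a file might look like the following; the route, file naming, and chunk ordering/concurrency concerns are placeholders and would need real handling.

using System.IO;
using System.Web.Mvc;

public class FileUploadController : Controller
{
    [HttpPost]
    public ActionResult Index(string fileName)
    {
        // Placeholder target path; derive it from the authenticated user
        // and a sanitised file name in real code.
        var path = Server.MapPath("~/App_Data/" + Path.GetFileName(fileName ?? "upload.tmp"));

        // Append the raw chunk from the request body to the file.
        using (var target = new FileStream(path, FileMode.Append, FileAccess.Write))
        {
            Request.InputStream.CopyTo(target);
        }

        return new HttpStatusCodeResult(200);
    }
}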
It can be implemented in different ways. If you are storing files on the file server as plain files in the file system, and all of your servers are inside the same virtual network,
then it would be better to create a shared folder on your file server, and once you receive a file at the web server, save it into this shared folder directly on the file server.
Here are the instructions on how to create shared folders: https://technet.microsoft.com/en-us/library/cc770880(v=ws.11).aspx
Just map a drive
I take it you have a means of saving the uploaded file on the web server's local filesystem. The question pertains to moving the file from the web server (which is probably one of many load-balanced nodes) to a central file system all web servers can access.
The solution to this is remarkably simple.
Let's say you are currently saving the files to some folder, say c:\uploadedfiles. The path to uploadedfiles is stored in your web.config.
Take the following steps:
Sign on as the service account under which your web site executes
Map a persistent network drive to the desired location, e.g. from command line:
NET USE f: \\MyFileServer\MyFileShare /user:SomeUserName password
Modify your web.config and change c:\uploadedfiles to f:\
Ta da, all done.
Just make sure the drive mapping is persistent, and make sure you use a user with adequate permissions, and voila.

How do I pass a Stream from a Web API to Azure Blob Storage without temp files?

I am working on an application where file uploads happen often, and can be pretty large in size.
Those files are being uploaded to a Web API, which will then get the Stream from the request, and pass it on to my storage service, that then uploads it to Azure Blob Storage.
I need to make sure that:
No temp files are written on the Web API instance
The request stream is not fully read into memory before passing it on to the storage service (to prevent OutOfMemoryExceptions).
I've looked at this article, which describes how to disable input stream buffering, but because many file uploads from many different users happen simultaneously, it's important that it actually does what it says on the tin.
This is what I have in my controller at the moment:
if (this.Request.Content.IsMimeMultipartContent())
{
    var provider = new MultipartMemoryStreamProvider();
    await this.Request.Content.ReadAsMultipartAsync(provider);

    var fileContent = provider.Contents.SingleOrDefault();
    if (fileContent == null)
    {
        throw new ArgumentException("No filename.");
    }

    var fileName = fileContent.Headers.ContentDisposition.FileName.Replace("\"", string.Empty);

    // I need to make sure this stream is ready to be processed by
    // the Azure client lib, but not buffered fully, to prevent OoM.
    var stream = await fileContent.ReadAsStreamAsync();
}
I don't know how I can reliably test this.
EDIT: I forgot to mention that uploading directly to Blob Storage (circumventing my API) won't work, as I am doing some size checking (e.g. can this user upload 500mb? Has this user used his quota?).
Solved it, with the help of this Gist.
Here's how I am using it, along with a clever "hack" to get the actual file size, without copying the file into memory first. Oh, and it's twice as fast (obviously).
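The actual provider implementation is in the linked Gist; roughly, the idea is a custom MultipartStreamProvider that hands each part a blob write stream instead of a MemoryStream. A hedged sketch of that idea using the classic WindowsAzure.Storage client follows - the container wiring and naming are illustrative and the real Gist differs in detail.

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using Microsoft.WindowsAzure.Storage.Blob;

public class BlobStorageMultipartStreamProvider : MultipartStreamProvider
{
    private readonly CloudBlobContainer _container;

    public BlobStorageMultipartStreamProvider(CloudBlobContainer container)
    {
        _container = container;
    }

    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        // Use the client-supplied name if present; sanitise it in real code.
        var fileName = headers.ContentDisposition != null && headers.ContentDisposition.FileName != null
            ? headers.ContentDisposition.FileName.Replace("\"", string.Empty)
            : Guid.NewGuid().ToString();

        var blob = _container.GetBlockBlobReference(fileName);

        // OpenWrite returns a stream that pushes blocks to storage as they
        // are written, so the part is never buffered fully in memory.
        return blob.OpenWrite();
    }
}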
// Create an instance of our provider.
// See https://gist.github.com/JamesRandall/11088079#file-blobstoragemultipartstreamprovider-cs for implementation.
var provider = new BlobStorageMultipartStreamProvider ();
// This is where the uploading is happening, by writing to the Azure stream
// as the file stream from the request is being read, leaving almost no memory footprint.
await this.Request.Content.ReadAsMultipartAsync(provider);
// We want to know the exact size of the file, but this info is not available to us before
// we've uploaded everything - which has just happened.
// We get the stream from the content (and that stream is the same instance we wrote to).
var stream = await provider.Contents.First().ReadAsStreamAsync();
// Problem: If you try to use stream.Length, you'll get an exception, because BlobWriteStream
// does not support it.
// But this is where we get fancy.
// Position == size, because the file has just been written to it, leaving the
// position at the end of the file.
var sizeInBytes = stream.Position;
Voilà, you've got your uploaded file's size without having to copy the file into your web instance's memory.
As for getting the file length before the file is uploaded, that's not as easy, and I had to resort to some rather unpleasant methods in order to get just an approximation.
In the BlobStorageMultipartStreamProvider:
var approxSize = parent.Headers.ContentLength.Value - parent.Headers.ToString().Length;
This gives me a pretty close file size, off by a few hundred bytes (depends on the HTTP header I guess). This is good enough for me, as my quota enforcement can accept a few bytes being shaved off.
Just for showing off, here's the memory footprint, reported by the insanely accurate and advanced Performance Tab in Task Manager.
Before - using MemoryStream, reading it into memory before uploading
After - writing directly to Blob Storage
I think a better approach is for you to go directly to Azure Blob Storage from your client. By leveraging the CORS support in Azure Storage you eliminate load on your Web API server resulting in better overall scale for your application.
Basically, you will create a Shared Access Signature (SAS) URL that your client can use to upload the file directly to Azure storage. For security reasons, it is recommended that you limit the time period for which the SAS is valid. Best practices guidance for generating the SAS URL is available here.
For your specific scenario check out this blog from the Azure Storage team where they discuss using CORS and SAS for this exact scenario. There is also a sample application so this should give you everything you need.
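As a rough illustration of the SAS approach (this is not the sample from the linked blog; the permissions, expiry window, and names are placeholder values):

using System;
using Microsoft.WindowsAzure.Storage.Blob;

public static class SasHelper
{
    // Returns a URL the client can use to upload the blob directly.
    public static string GetUploadUrl(CloudBlobContainer container, string blobName)
    {
        var blob = container.GetBlockBlobReference(blobName);

        var policy = new SharedAccessBlobPolicy
        {
            Permissions = SharedAccessBlobPermissions.Write,
            // Keep the validity window short, as the answer recommends.
            SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
        };

        // The SAS token is appended to the blob URI.
        return blob.Uri + blob.GetSharedAccessSignature(policy);
    }
}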

How To Do a Server To Server File Transfer without any user interaction?

In my scenario, users are able to upload zip files to a.example.com
I would love to create a "daemon" which, at specified time intervals, will move/transfer any zip files uploaded by the users from a.example.com to b.example.com.
From the info I've gathered so far:
The daemon will be an .ashx generic handler.
The daemon will be triggered at the specified time intervals via a Plesk cron job.
The daemon (thanks to SLaks) will consist of two FtpWebRequests (one for reading and one for writing).
So the question is: how could I implement step 3?
Do I have to read the whole file into a byte array in memory and try to write that to b.example.com?
How could I write the data I read to b.example.com?
Could I perform the reading and writing of the file at the same time?
No, I am not asking for the full code; I just can't figure out how I could perform the reading and writing on the fly, without user interaction.
I mean, I could download the file locally from a.example.com and upload it to b.example.com, but that is not the point.
Here is another solution:
Let ASP.Net on server A receive the file as a regular file upload and store it in directory XXX.
Have a windows service on server A that checks directory XXX for new files.
Let the windows service upload the file to server B using HttpWebRequest.
Let server B receive the file using a regular ASP.Net file upload page.
Links:
File upload example (ASP.Net): http://msdn.microsoft.com/en-us/library/aa479405.aspx
Building a windows service: http://www.codeproject.com/KB/system/WindowsService.aspx
Uploading files using HttpWebRequest: Upload files with HTTPWebrequest (multipart/form-data)
Problems you have got to solve:
How to determine which files to upload to server B. I would use Directory.GetFiles in a Timer to find new files instead of using a FileSystemWatcher. You need to be able to check whether a file has been uploaded previously (delete it, rename it, check a DB, or whatever suits your needs).
Authentication on server B, so that only you can upload files to it.
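A rough sketch of that timer-based scan inside such a windows service follows; the directory, interval, and "delete after upload" policy are placeholders, and the actual upload to server B would use HttpWebRequest as in the link above.

using System;
using System.IO;
using System.Threading;

public class UploadWatcher
{
    private const string WatchDirectory = @"C:\temp\UploadedFiles"; // placeholder
    private Timer _timer;

    public void Start()
    {
        // Poll every 30 seconds instead of using FileSystemWatcher,
        // as suggested above.
        _timer = new Timer(_ => Scan(), null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
    }

    private void Scan()
    {
        foreach (var file in Directory.GetFiles(WatchDirectory))
        {
            try
            {
                UploadToServerB(file);   // e.g. multipart POST via HttpWebRequest
                File.Delete(file);       // or rename/record in a DB, per your needs
            }
            catch (Exception)
            {
                // Leave the file in place so the next scan retries it.
            }
        }
    }

    private void UploadToServerB(string path)
    {
        // See the "Upload files with HTTPWebrequest (multipart/form-data)" link above.
        throw new NotImplementedException();
    }
}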
To answer your questions - yes you can read and write the files at the same time.
You can open an FTPWebRequest to ServerA and an FTPWebRequest to ServerB. On the FTPWebRequest to ServerA you would request the file and get the ResponseStream. Once you have the ResponseStream, you would read a chunk of bytes at a time and write that chunk of bytes to the ServerB RequestStream.
The only memory you would be using would be the byte[] buffer in your read/write loop. Just keep in mind though that the underlying implementation of FTPWebRequest will download the complete FTP file before returning the response stream.
Similarly, you cannot send your FTPWebRequest to upload the new file until all bytes have been written. In effect, the operations will happen synchronously. You will call GetResponse which won't return until the full file is available, and only then can you 'upload' the new file.
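A minimal sketch of that read/write loop, assuming FTP on both ends; the URIs and credentials are placeholders and error handling is omitted.

using System.IO;
using System.Net;

public static class FtpRelay
{
    public static void Copy(string sourceUri, string targetUri, NetworkCredential credentials)
    {
        // Download request against server A.
        var download = (FtpWebRequest)WebRequest.Create(sourceUri);
        download.Method = WebRequestMethods.Ftp.DownloadFile;
        download.Credentials = credentials;

        // Upload request against server B.
        var upload = (FtpWebRequest)WebRequest.Create(targetUri);
        upload.Method = WebRequestMethods.Ftp.UploadFile;
        upload.Credentials = credentials;

        using (var response = download.GetResponse())
        using (var source = response.GetResponseStream())
        using (var target = upload.GetRequestStream())
        {
            // Move a chunk at a time from the response stream to the request stream.
            var buffer = new byte[8192];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
            {
                target.Write(buffer, 0, read);
            }
        }

        // Completing the request actually finishes the upload on server B.
        using (upload.GetResponse()) { }
    }
}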
References:
FTPWebRequest
Something you have to take into consideration is that a long-running web request (your .ashx generic handler) may be killed when the AppDomain refreshes. Therefore you have to implement some sort of atomic transaction logic in your code, and you should handle sudden disconnects and incomplete FTP transfers if you go that way.
Did you have a look at Windows Azure before? This cloud platform supports distributed file system, and has built-in atomic transactions. Plus it scales nicely, should your service grow fast.
I would make it pretty simple. The client program uploads the file to server A. This can be done very easily in C# with an FtpWebRequest.
http://msdn.microsoft.com/en-us/library/ms229715.aspx
I would then have a service on server A that monitors the directory where files are uploaded. When a file is uploaded to that directory or on certain intervals it simply copies files over to server B. Again this can be done via Ftp or other means if they're on the same network.
You need some listener on the target domain (an FTP server running there), and on the client side you would use System.Net.WebClient with UploadFile or UploadFileAsync to send the file. Is that what you are asking?
It sounds like you don't really need a webservice or handler. All you need is a program that will, at regular intervals, open up an FTP connection to the other server and move the files. This can be done by any .NET program with the System.Net.WebClient class; it doesn't have to be a "web app". This other program could be a service, which could handle its own timing, or a simple app run by your cron job. If you need this to go two ways, for instance if the two servers are mirrors, you simply have the same app on the second box doing the same thing to upload files over to the first.
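For example, the WebClient call itself is only a couple of lines; the address, path, and credentials below are placeholders.

using System.Net;

var client = new WebClient();
client.Credentials = new NetworkCredential("user", "password"); // placeholder
// For ftp:// addresses, UploadFile issues an FTP STOR of the local file.
client.UploadFile("ftp://b.example.com/uploads/file.zip", @"C:\temp\UploadedFiles\file.zip");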
If both machines are in the same domain, couldn't you just do file replication at the OS level?
DFS
Set up keys if you are using Linux-based systems:
http://compdottech.blogspot.com/2007/10/unix-login-without-password-setting.html
Once you have the keys working, you can copy the file from system A to system B by writing regular shell scripts that need no user interaction.

How can i upload file size 1.25GB in ASP.NET MVC

My client is uploading files of more than 1 GB through the application. I know I can only upload 100 MB using an ASP.NET MVC application.
public static byte[] ReadStream(Stream st)
{
    st.Position = 0;
    byte[] data = new byte[st.Length];
    .
    .
    .
    .
}
I am getting an error at byte[] data = new byte[st.Length]; because st.Length = 1330768612.
Error - "Exception of type 'System.OutOfMemoryException' was thrown."
Is there any way I can upload a file of more than 1 GB?
Also, why can we only define maxRequestLength as 0 - 2097151 in web.config?
IMO you need to use the right tool for the job. HTTP was simply not intended to transfer large files like this. Why don't you use FTP instead? Maybe you could then build a web interface around that.
The error shown to you suggests the server does not have enough memory to process the file in memory. Check whether your server has enough memory to allocate such a big array/file.
You could also try to process the stream in chunks. The fact that you get an out-of-memory error suggests that the file is sent to the server, but the server cannot process it.
I really think it has to do with the size of the array you allocate. It just won't fit in the memory of your machine (or in the memory assigned to .NET).
The error says that you ran out of memory while trying to allocate a 1GB byte array in memory. This is not related to MVC. You should also note that the memory limit for 32bit processes is 2GB. If your server runs a 32bit OS and you allocate 1GB of that for a single upload, you will quickly deplete the available memory.
Instead of trying to read the entire stream into memory, use Stream.Read to read the data in chunks using a reasonably sized buffer, and store the chunks to a file stream with a Write call. Not only will you avoid OutOfMemoryExceptions, your code will also run much faster, because you won't have to wait to load the entire 1GB before storing it to a file.
The code can be as simple as this:
public static void SaveStream(Stream st, string targetFile)
{
    byte[] inBuffer = new byte[10000];
    using (FileStream outStream = File.Create(targetFile, 20000))
    {
        int bytesRead;
        // Read the upload stream a chunk at a time and write each chunk out,
        // so the whole file is never held in memory.
        while ((bytesRead = st.Read(inBuffer, 0, inBuffer.Length)) > 0)
        {
            outStream.Write(inBuffer, 0, bytesRead);
        }
    }
}
You can tweak the buffer sizes to balance throughput (how quickly you upload and save) vs scalability (how many clients you can handle).
I remember on an older project we had to work out a way to allow the user to upload a 2-4gb file for our ASP.NET web application.
If I recall correctly, we used the 'File Upload Control' and edited the web.config to allow for a greater file size:
<system.web>
<httpRuntime executionTimeout="2400" maxRequestLength="40960" />
</system.web>
Another option would be to use the HttpModule:
http://dotnetslackers.com/Community/blogs/haissam/archive/2008/09/12/upload-large-files-in-asp-net-using-httpmodule.aspx
I would suggest using FTP, or writing a small desktop application.
HTTP was never intended to send such large files.
Here's a Microsoft Knowledge base answer for you

ASP.Net: Writing chunks of file..HTTP File upload with resume

Please refer to question: Resume in upload file control
Now, to find a solution for the above question, I want to work on it and develop a user control which can resume an HTTP file upload process.
For this, I'm going to create a temporary file on the server until the upload is complete. Once the uploading is completely done, I will just save it to the desired location.
The procedure I have thought of is:
Create a temporary file with a unique name (may be GUID) on the server.
Read a chunk of file and append it to this temp file on the server.
Continue steps 1 to 3 until EOF is reached.
Now if the connection is lost or user clicks on pause button, the writing of file is stopped and the temporary file is not deleted.
Clicking on resume again, or uploading the same file again, will check on the server whether the file exists, and ask the user whether to resume or to overwrite. (Not sure how to check if it's the same file. Also, how to stop sending the chunks from client to server.)
Clicking on resume will start from where it is required to be uploaded and will append that to the file on the server. (Again not sure how to do this.)
Questions:
Are these steps correct to achieve the goal? Or need some modifications?
I'm not sure how to implement all these steps. :-( Need ideas, links...
Any help appreciated...
What you are trying is not easy, but it is doable. Follow these guidelines:
Write 2 functions on the client side using ajax:
getUploadedPortionFromServer(filename) - This will ask the server if the file exists, and it should get an xml response from the server with the following info:
boolean(exist/not exist), size(bytes), md5(string)
The function will also run an MD5 over the same number of bytes (the size it got from the server) of the local file;
if the MD5 is the same, you can continue sending from the point where it stopped;
if it is not the same, or size == 0, start over.
uploadFromBytes(x) - Will depend on the 1st function.
On the server you have to write matching functions, which will check the needed stuff, and send the response via XML to the client.
In order to have distinct filenames, you should use some sort of user tagging.
Are users logged in to your server? If so, use a hash of their username appended to the filename; this will help you distinguish between the files.
If not, use a cookie.
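A hedged sketch of the server-side matching function - an ASHX-style handler returning the XML described above. The file location, naming scheme, and response format are placeholders, not a finished design.

using System;
using System.IO;
using System.Security.Cryptography;
using System.Web;

public class UploadStatusHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        // Placeholder naming scheme: temp file keyed by the uploaded file name.
        var fileName = Path.GetFileName(context.Request["filename"] ?? string.Empty);
        var tempPath = context.Server.MapPath("~/App_Data/uploads/" + fileName);

        context.Response.ContentType = "text/xml";

        if (!File.Exists(tempPath))
        {
            context.Response.Write("<status><exists>false</exists><size>0</size><md5></md5></status>");
            return;
        }

        byte[] hash;
        long size;
        using (var stream = File.OpenRead(tempPath))
        using (var md5 = MD5.Create())
        {
            size = stream.Length;
            // Hash the bytes received so far; the client compares this against
            // a hash of the same leading portion of its local file.
            hash = md5.ComputeHash(stream);
        }

        context.Response.Write(string.Format(
            "<status><exists>true</exists><size>{0}</size><md5>{1}</md5></status>",
            size,
            BitConverter.ToString(hash).Replace("-", string.Empty)));
    }

    public bool IsReusable { get { return false; } }
}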
