How to develop web service (ASMX) for uploading file with status - c#

Uploading a file to a web server with the help of a web service is easy. This is the way I generally do it. Here is my sample code:
[WebMethod]
public bool UploadFile(string FileName, byte[] buffer, long Offset)
{
    bool retVal = false;
    try
    {
        // Build the path where the file will be saved on the server,
        // reading the upload directory from web.config.
        string FilePath =
            Path.Combine(ConfigurationManager.AppSettings["upload_path"], FileName);

        if (Offset == 0) // new file, create an empty file
            File.Create(FilePath).Close();

        // Open a file stream and write the buffer.
        // Don't open with FileMode.Append, because the transfer may wish to
        // start at a different point.
        using (FileStream fs = new FileStream(FilePath, FileMode.Open,
            FileAccess.ReadWrite, FileShare.Read))
        {
            fs.Seek(Offset, SeekOrigin.Begin);
            fs.Write(buffer, 0, buffer.Length);
        }
        retVal = true;
    }
    catch (Exception ex)
    {
        // Report the error by email.
        common.SendError(ex);
    }
    return retVal;
}
But I want to develop a web service that reports upload status as a percentage, and that fires an event on the client side when the upload finishes, with a status message saying whether or not the file was uploaded completely. I also need the routine to handle multiple simultaneous requests, and it must be thread safe. Please guide me on how to design a routine that satisfies all these points. Thanks.

I'd highly recommend that you forget about implementing this from scratch and instead look into one of the existing client file upload solutions that are available - most come with some boilerplate .NET code you can plug into an existing application.
I've used jQuery Upload and plUpload, both of which have solid client-side upload managers that upload files via HTTP Range headers and provide upload status information in the process. I believe both come with .NET examples.
Implementing the server side for these types of upload handlers involves receiving HTTP chunks of data identified by a sort of upload session id. The client sends chunks of the file, each identified by this id, along with progress information such as bytes transferred and total file size, plus a status value that indicates the state of the request. The status lets you know when the file is completely uploaded.
The incoming data from the POST buffer can then be written to a file, a database, or some other storage mechanism, based on the unique id passed from the client. Because the client application sends the data in chunks, it can provide progress information along the way. If you use a client library like plUpload or jQuery-Upload, they'll provide the customizable UI.
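For a rough idea, a chunk-receiving endpoint along those lines could look like the WebMethod below. This is only a sketch, not plUpload's or jQuery Upload's actual wire protocol; the uploadId parameter, the upload_path setting, and the assumption that chunks for one upload arrive sequentially are all mine:
[WebMethod]
public int UploadChunk(string uploadId, byte[] chunk, long offset, long totalSize)
{
    // The upload id doubles as the temp file name. Assumes the id was
    // issued by the server, so it is safe to use in a path.
    string path = Path.Combine(
        ConfigurationManager.AppSettings["upload_path"], uploadId + ".part");

    // Separate uploads write to separate files, so concurrent requests
    // don't contend; chunks for one upload are assumed to arrive in order.
    using (var fs = new FileStream(path, FileMode.OpenOrCreate,
        FileAccess.Write, FileShare.Read))
    {
        fs.Seek(offset, SeekOrigin.Begin);
        fs.Write(chunk, 0, chunk.Length);
    }

    // Return percent complete so the client can update its progress UI.
    return (int)((offset + chunk.Length) * 100 / totalSize);
}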

Related

How do I pass a Stream from a Web API to Azure Blob Storage without temp files?

I am working on an application where file uploads happen often, and can be pretty large in size.
Those files are being uploaded to a Web API, which will then get the Stream from the request, and pass it on to my storage service, that then uploads it to Azure Blob Storage.
I need to make sure that:
No temp files are written on the Web API instance
The request stream is not fully read into memory before passing it on to the storage service (to prevent OutOfMemoryExceptions).
I've looked at this article, which describes how to disable input stream buffering, but because many file uploads from many different users happen simultaneously, it's important that it actually does what it says on the tin.
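For reference, one common way to disable input buffering in web-hosted Web API is to replace the host's buffer policy; a minimal sketch (the class name is mine, and the standard System.Web.Http.WebHost types are assumed):
public class NoBufferPolicySelector : WebHostBufferPolicySelector
{
    public override bool UseBufferedInputStream(object hostContext)
    {
        // Returning false tells the host to stream the request body
        // instead of buffering it in memory first.
        return false;
    }
}
// Registered once at startup:
// GlobalConfiguration.Configuration.Services.Replace(
//     typeof(IHostBufferPolicySelector), new NoBufferPolicySelector());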
This is what I have in my controller at the moment:
if (this.Request.Content.IsMimeMultipartContent())
{
    var provider = new MultipartMemoryStreamProvider();
    await this.Request.Content.ReadAsMultipartAsync(provider);
    var fileContent = provider.Contents.SingleOrDefault();
    if (fileContent == null)
    {
        throw new ArgumentException("No filename.");
    }
    var fileName = fileContent.Headers.ContentDisposition.FileName.Replace("\"", string.Empty);
    // I need to make sure this stream is ready to be processed by
    // the Azure client lib, but not buffered fully, to prevent OoM.
    var stream = await fileContent.ReadAsStreamAsync();
}
I don't know how I can reliably test this.
EDIT: I forgot to mention that uploading directly to Blob Storage (circumventing my API) won't work, as I am doing some size checking (e.g. can this user upload 500 MB? Has this user used up his quota?).
Solved it, with the help of this Gist.
Here's how I am using it, along with a clever "hack" to get the actual file size without copying the file into memory first. Oh, and it's twice as fast (obviously).
// Create an instance of our provider.
// See https://gist.github.com/JamesRandall/11088079#file-blobstoragemultipartstreamprovider-cs for the implementation.
var provider = new BlobStorageMultipartStreamProvider();

// This is where the uploading happens: the Azure stream is written to
// as the file stream from the request is being read, leaving almost no memory footprint.
await this.Request.Content.ReadAsMultipartAsync(provider);

// We want to know the exact size of the file, but this info is not available to us before
// we've uploaded everything - which has just happened.
// We get the stream from the content (and that stream is the same instance we wrote to).
var stream = await provider.Contents.First().ReadAsStreamAsync();

// Problem: if you try to use stream.Length, you'll get an exception, because BlobWriteStream
// does not support it.
// But this is where we get fancy.
// Position == size, because the file has just been written, leaving the
// position at the end of the file.
var sizeInBytes = stream.Position;
Voilà: you've got your uploaded file's size without having to copy the file into your web instance's memory.
As for getting the file length before the file is uploaded, that's not as easy, and I had to resort to some rather unpleasant methods to get even an approximation.
In the BlobStorageMultipartStreamProvider:
var approxSize = parent.Headers.ContentLength.Value - parent.Headers.ToString().Length;
This gives me a pretty close file size, off by a few hundred bytes (depending on the HTTP headers, I guess). That's good enough for me, as my quota enforcement can tolerate a few bytes being shaved off.
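If you just want the gist of the Gist: the provider's job is to return the blob's write stream instead of a MemoryStream, so the multipart reader copies the request body straight into Azure. A condensed sketch (not the linked Gist's exact code; the container name and connection-string key are placeholders, and the classic Microsoft.WindowsAzure.Storage library is assumed):
public class BlobStorageMultipartStreamProvider : MultipartStreamProvider
{
    public override Stream GetStream(HttpContent parent, HttpContentHeaders headers)
    {
        var account = CloudStorageAccount.Parse(
            ConfigurationManager.AppSettings["StorageConnectionString"]);
        var container = account.CreateCloudBlobClient().GetContainerReference("uploads");
        var blob = container.GetBlockBlobReference(Guid.NewGuid().ToString());

        // ReadAsMultipartAsync will write the uploaded bytes directly
        // into this blob stream as it reads the request.
        return blob.OpenWrite();
    }
}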
Just for showing off, here's the memory footprint, as reported by the insanely accurate and advanced Performance Tab in Task Manager (screenshots not reproduced here):
Before - using MemoryStream, reading the file into memory before uploading
After - writing directly to Blob Storage
I think a better approach is for you to go directly to Azure Blob Storage from your client. By leveraging the CORS support in Azure Storage you eliminate load on your Web API server resulting in better overall scale for your application.
Basically, you will create a Shared Access Signature (SAS) URL that your client can use to upload the file directly to Azure storage. For security reasons, it is recommended that you limit the time period for which the SAS is valid. Best practices guidance for generating the SAS URL is available here.
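For illustration, generating such a SAS URL with the classic Microsoft.WindowsAzure.Storage client library looks roughly like this (the container name, blob name, and 15-minute window are placeholder choices):
var account = CloudStorageAccount.Parse(connectionString); // connection string from config
var container = account.CreateCloudBlobClient().GetContainerReference("uploads");
var blob = container.GetBlockBlobReference(Guid.NewGuid().ToString());

// Grant write-only access for a short window, per the guidance above.
string sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Write,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
});

// The client PUTs the file directly to this URL, bypassing the Web API server.
string uploadUrl = blob.Uri + sas;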
For your specific scenario check out this blog from the Azure Storage team where they discuss using CORS and SAS for this exact scenario. There is also a sample application so this should give you everything you need.

Uploading a file through ASP.NET MVC asynchronously?

Is it possible to upload a file asynchronously to a website using ASP.NET MVC 4?
In my current solution, when the user uploads a file, the post-back checks the number of bytes in the file stream: if it's greater than 8024 bytes, we put an error in TempData and return to the same page, indicating that the file is too large. If it is under 8024 bytes, I create a new Task that funnels bytes from the FileStream into a new destination file somewhere on the server.
However, my boss correctly pointed out that at that point it may be too late to warn the user that their file is too large.
We were wondering whether it is possible to check the length of the file on the server side before reading any bytes from the user, so the user doesn't have to wait for the file to fully upload only to find out that it is too large. (Imagine uploading an 8125 KB file on a 3 Mbit connection, only to find out that the file you gave was 1 byte too large; sorry, try again.)
The reason I asked about uploading asynchronously is that this seems very much like asynchronous streaming to me: immediately give a response, regardless of the actual progress of the upload.
We're aware you can configure IIS to reject requests over X bytes, but we want to handle the error ourselves.
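For reference, the kind of early check we have in mind would look something like this (a sketch only; Request.ContentLength comes from the request header, which can be absent or dishonest, and ASP.NET may still buffer the body before the action runs):
[HttpPost]
public ActionResult Upload()
{
    const int maxBytes = 8024;

    // ContentLength is taken from the header, not from reading the body.
    if (Request.ContentLength > maxBytes)
    {
        TempData["Error"] = "The file is too large.";
        return RedirectToAction("Index");
    }

    // Proceed to read Request.Files / Request.InputStream here.
    return RedirectToAction("Index");
}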

Wait for download to complete

I need to download a zip file created in real time by a web service.
Let me explain.
I am developing a web application that uses a SOAP/XML web service. The web service has an Export function that returns a temporary URL for downloading the file. Upon a download request, the server creates the file and makes it available for download after a few seconds.
I'm trying to use
webClient.DownloadFile(url, @"c:/etc../");
This downloads the file but saves a 0 KB file: it's too fast! The server does not have time to create the file. I also tried
webClient.OpenRead(url);
System.Threading.Thread.Sleep(7000);
webClient.DownloadFile(url, @"c:/etc../");
but that does not work either.
In debug mode, if I put a breakpoint on webClient.DownloadFile and resume after 3 or 4 seconds, the server has time to create the file and I get a complete download.
The developers of the web service suggested that I use "polling" on the URL until the file is ready for download. How does that work?
How can I solve my problem? (I also tried DownloadFile in asynchronous mode.)
I have a similar mechanism in my application, and it works perfectly. WebClient makes the request and waits, because the server is creating the response (the file). If WebClient downloads 0 KB, that means the server responded to the request with an empty response. This may not be a bug, but a design choice. If creating the file takes a long time, this approach could result in timeouts; on the other hand, if creating the file takes a short time, the server side should respond with the file (making WebClient wait on the request until the file is ready). I would discuss this with the other developers and maybe redesign the "file generator".
EDIT: Polling means making requests in a loop, for example every 2 seconds. I'm using DownloadData here because it is wasteful, and resource consuming, to save an empty file on every attempt, which is what DownloadFile would do.
public void PollAndDownloadFile(Uri uri, string filePath)
{
    WebClient webClient = new WebClient();

    // Keep asking for the file; an empty response means it isn't ready yet.
    byte[] downloadedBytes = webClient.DownloadData(uri);
    while (downloadedBytes.Length == 0)
    {
        Thread.Sleep(2000);
        downloadedBytes = webClient.DownloadData(uri);
    }

    using (Stream file = File.Open(filePath, FileMode.Create))
    {
        file.Write(downloadedBytes, 0, downloadedBytes.Length);
    }
}

How can I upload a 1.25 GB file in ASP.NET MVC?

My client is uploading files of more than 1 GB through the application. As far as I know, I can only upload about 100 MB using an ASP.NET MVC application.
public static byte[] ReadStream(Stream st)
{
    st.Position = 0;
    byte[] data = new byte[st.Length];
    .
    .
    .
    .
}
I am getting the error at byte[] data = new byte[st.Length]; because st.Length = 1330768612.
Error - "Exception of type 'System.OutOfMemoryException' was thrown."
Is there any way I can upload a file of more than 1 GB?
Also, why can maxRequestLength only be set between 0 and 2097151 in web.config?
IMO you need to use the right tool for the job. HTTP was simply not intended to transfer large files like this. Why don't you use FTP instead? You could then build a web interface around it.
The error shown to you suggests the server does not have enough memory to process the file in memory. Check whether your server has enough memory to allocate such a big array/file.
You could also try to process the stream in chunks. The fact that you get an out-of-memory error suggests that the file reaches the server, but the server cannot process it.
I really think it has to do with the size of the array you allocate. It just won't fit in the memory of your machine (or in the memory assigned to .NET).
The error says that you ran out of memory while trying to allocate a 1 GB byte array. This is not related to MVC. You should also note that the memory limit for 32-bit processes is 2 GB. If your server runs a 32-bit OS and you allocate 1 GB of that for a single upload, you will quickly deplete the available memory.
Instead of trying to read the entire stream into memory, use Stream.Read to read the data in chunks with a reasonably sized buffer, and store each chunk to a file stream with a Write call. Not only will you avoid OutOfMemoryExceptions, your code will also run much faster, because you won't have to wait for the entire 1 GB to load before storing it to a file.
The code can be as simple as this:
public static void SaveStream(Stream st, string targetFile)
{
    byte[] inBuffer = new byte[10000];
    using (FileStream outStream = File.Create(targetFile, 20000))
    {
        int bytesRead;
        // Read until the stream is exhausted, writing only the bytes
        // actually returned by each Read call.
        while ((bytesRead = st.Read(inBuffer, 0, inBuffer.Length)) > 0)
        {
            outStream.Write(inBuffer, 0, bytesRead);
        }
    }
}
You can tweak the buffer sizes to balance throughput (how quickly you upload and save) vs scalability (how many clients you can handle).
I remember that on an older project we had to work out a way to allow the user to upload a 2-4 GB file to our ASP.NET web application.
If I recall correctly, we used the FileUpload control and edited web.config to allow for a greater file size (note that maxRequestLength is specified in KB):
<system.web>
    <httpRuntime executionTimeout="2400" maxRequestLength="40960" />
</system.web>
Another option would be to use an HttpModule:
http://dotnetslackers.com/Community/blogs/haissam/archive/2008/09/12/upload-large-files-in-asp-net-using-httpmodule.aspx
I would suggest using FTP, or writing a small desktop application.
HTTP was never intended to send such large files.
Here's a Microsoft Knowledge Base answer for you.
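For illustration, a desktop-side FTP upload can be as small as this (the server address, credentials, and paths are placeholders):
using (var client = new WebClient())
{
    client.Credentials = new NetworkCredential("user", "password");

    // WebClient speaks FTP for ftp:// URIs; this streams the file from disk.
    client.UploadFile("ftp://ftp.example.com/uploads/big.zip", @"C:\files\big.zip");
}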

C# Response.WriteFile vs Response.TransmitFile filesize issues

I have a 5 MB PDF on the server. Downloading this file using WriteFile gives me a 15 MB download, whereas TransmitFile gives the correct 5 MB file size.
Is this due to some sort of decompression into memory on the server for WriteFile? Just wondering if anyone has seen the same thing happening.
(P.S. I only noticed it since we moved to IIS 7.)
The code being:
if (File.Exists(filepath))
{
    HttpContext.Current.Response.Clear();
    HttpContext.Current.Response.ContentType = "application/octet-stream";
    HttpContext.Current.Response.AddHeader("content-disposition",
        "attachment;filename=\"" + Path.GetFileName(filepath) + "\"");
    HttpContext.Current.Response.AddHeader("content-length",
        new FileInfo(filepath).Length.ToString());
    //HttpContext.Current.Response.WriteFile(filepath);
    HttpContext.Current.Response.TransmitFile(filepath);
    HttpContext.Current.Response.Flush();
    HttpContext.Current.Response.Close();
}
TransmitFile - Writes the specified file directly to an HTTP response output stream without buffering it in memory.
WriteFile - Writes the specified file directly to an HTTP response output stream.
I would say the difference occurs because TransmitFile doesn't buffer the file. WriteFile uses buffering (AFAIK), temporarily holding the data before transmitting it; as such, it cannot report an accurate file size because it writes the data in chunks.
You can understand it from the following definitions.
Response.TransmitFile vs Response.WriteFile:
TransmitFile: This method sends the file to the client without loading it into application memory on the server. It is the ideal choice when the file being downloaded is large.
WriteFile: This method loads the file being downloaded into the server's memory before sending it to the client. If the file size is large, the ASP.NET worker process might get restarted.
Reference: TransmitFile VS WriteFile
