So I currently have 3 sets of small files being generated (think about 1-2 per second per set), which I'd like to upload to a server in a timely manner. The files aren't large, but due to the number of them, it seems to clog up the server. Here's my current implementation:
System.Net.Http.HttpRequestMessage request = new System.Net.Http.HttpRequestMessage(System.Net.Http.HttpMethod.Post, _serverUri);
MultipartFormDataContent form = new MultipartFormDataContent();
request.Content = form;

HttpContent content1 = new ByteArrayContent(filePart1);
content1.Headers.ContentDisposition = new ContentDispositionHeaderValue("form-data")
{
    Name = "filePart1",
    FileName = "filePart1"
};
form.Add(content1);

HttpContent content2 = new ByteArrayContent(filePart2);
content2.Headers.ContentDisposition = new ContentDispositionHeaderValue("form-data")
{
    Name = "filePart2",
    FileName = "filePart2"
};
form.Add(content2);

_httpClient.SendAsync(request).ContinueWith((response) =>
{
    ProcessResponse(response.Result);
});
Where filePart1 and filePart2 each represent a file from one of the data sets, so there are three of these blocks, one per set, running concurrently.
What I'd like to know is whether there's a better way to accomplish this, since the current method seems to clog up the server with its bombardment of files. Would it be possible to open a stream for each set of files and send each file as a chunk? Should I wait for the server to respond before sending the next file?
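For reference, here is a minimal sketch of the "wait for the response before sending the next file" idea, with one queue and one sender loop per file set. The _pendingFiles queue, UploadLoopAsync method and the 100 ms poll interval are illustrative, not part of the code above; it assumes System.Collections.Concurrent, System.Net.Http and System.Threading.Tasks are available.

// A sketch of a per-set sender loop: files are queued as they are produced and
// uploaded one at a time, awaiting each response before sending the next file.
private readonly ConcurrentQueue<byte[]> _pendingFiles = new ConcurrentQueue<byte[]>();

private async Task UploadLoopAsync(CancellationToken token)
{
    while (!token.IsCancellationRequested)
    {
        byte[] filePart;
        if (!_pendingFiles.TryDequeue(out filePart))
        {
            await Task.Delay(100, token); // nothing queued yet, back off briefly
            continue;
        }

        using (var form = new MultipartFormDataContent())
        {
            var content = new ByteArrayContent(filePart);
            // form.Add sets the Content-Disposition name/filename for the part
            form.Add(content, "filePart", "filePart");

            using (HttpResponseMessage response = await _httpClient.PostAsync(_serverUri, form, token))
            {
                ProcessResponse(response);
            }
        }
    }
}

Running one such loop per file set keeps each set sequential (and throttled by the server's response time) while the three sets still upload concurrently.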
Related
After creating an upload session, I'm uploading the content of a large file by sending chunk requests. However, sending the requests one at a time takes a relatively long time to upload the entire file.
Is it possible to send multiple chunk requests at the same time by using multithreading?
I tried using Parallel.ForEach, but it doesn't work.
int maxSizeChunk = 320 * 1024 * 4;
ChunkedUploadProvider uploadProvider = new ChunkedUploadProvider(uploadSession, client, ms, maxSizeChunk);
IEnumerable<UploadChunkRequest> chunkRequests = uploadProvider.GetUploadChunkRequests();
List<Exception> exceptions = new List<Exception>();
byte[] readBuffer = new byte[maxSizeChunk];
// How to send multiple requests at once?
foreach (UploadChunkRequest request in chunkRequests)
{
    UploadChunkResult result = await uploadProvider.GetChunkRequestResponseAsync(request, readBuffer, exceptions);
    if (result.UploadSucceeded)
        uploadedFile = result.ItemResponse;
}
I don't think you can upload via multiple streams at the same time.
Per documentation:
The fragments of the file must be uploaded sequentially in order. Uploading fragments out of order will result in an error.
I'm investigating a possible memory leak on a project where users upload files. The files are usually .zip or zipped .exe files used by other software; the average file size is 80 MB.
There is an MVC app that has the interface to upload the files (the View). This view sends a POST request to an action within a controller. The controller action gets the file using MultipartFormDataContent, similar to this: Sending binary data along with a REST API request and this: WEB API FILE UPLOAD, SINGLE OR MULTIPLE FILES
Inside the action, I get the file and convert it to a byte array. After converting, I send a POST request to my API with the byte[].
Here is the MVC APP code that does that:
[HttpPost]
public async Task<ActionResult> Create(ReaderCreateViewModel model)
{
    HttpPostedFileBase file = Request.Files["Upload"];
    string fileName = file.FileName;

    using (var client = new HttpClient())
    {
        using (var content = new MultipartFormDataContent())
        {
            using (var binaryReader = new BinaryReader(file.InputStream))
            {
                model.File = binaryReader.ReadBytes(file.ContentLength);
            }

            var fileContent = new ByteArrayContent(model.File);
            fileContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
            {
                FileName = file.FileName
            };
            content.Add(fileContent);

            var requestUri = "http://localhost:52970/api/upload";
            HttpResponseMessage response = client.PostAsync(requestUri, content).Result;

            if (response.IsSuccessStatusCode)
            {
                return RedirectToAction("Index");
            }
        }
    }

    return View("Index", model);
}
After investigating using several memory tools such as this: Best Practices No. 5: Detecting .NET application memory leaks
I have discovered that after converting the file to a byte array at these lines:
using (var binaryReader = new BinaryReader(file.InputStream))
{
    model.File = binaryReader.ReadBytes(file.ContentLength);
}
The memory usage increases from roughly 70 MB to roughly 175 MB, and even after the request has been sent and completed, the memory is never released. If I keep uploading files, the memory just keeps increasing until the server is completely down.
We can't send the files directly from the multipart form to the API because we need to send and validate some data first (business requirements/rules). After researching, I came up with this approach, but the memory leak problem is concerning me.
Am I missing something? Should the garbage collector collect the memory right away? I'm using the using statement on all disposable objects, but it doesn't help.
I'm also curious about this approach to uploading the files. Should I be doing it in a different way?
Just for clarification, the API is separate from the MVC application (each is hosted as a separate web site in IIS), and it is all in C#.
1. Should the garbage collector collect the memory right away?
The garbage collector does not release the memory immediately because it is a time-consuming operation. When a garbage collection occurs, all your application's managed threads are paused. This introduces unwanted latency. So, the garbage collector only acts occasionally, based on a sophisticated algorithm.
2. I'm using the using statement on all disposable objects, but it doesn't help.
The using statement deals with unmanaged resources, which are in limited supply (usually IO-related: file handles, database and network connections). It disposes those resources deterministically, but it does not free managed memory or trigger a garbage collection.
3. Am I missing something?
It looks like you do not need the original byte array after you have wrapped it with ByteArrayContent. You do not clean up model.File after wrapping it, and the array can end up being passed to the Index view.
I would replace:
using (var binaryReader = new BinaryReader(file.InputStream))
{
    model.File = binaryReader.ReadBytes(file.ContentLength);
}
var fileContent = new ByteArrayContent(model.File);
with:
ByteArrayContent fileContent = null;
using (var binaryReader = new BinaryReader(file.InputStream))
{
    fileContent = new ByteArrayContent(binaryReader.ReadBytes(file.ContentLength));
}
to avoid the need to clean up model.File explicitly.
4. If I keep uploading files, the memory just keeps increasing until the server is completely down.
If your files are 80 MB on average, the byte arrays end up on the large object heap (any allocation over 85,000 bytes does). The large object heap is collected only during full (generation 2) collections and is not compacted automatically, so it fragments easily. It looks like in your case the large object heap grows indefinitely (which can happen).
Provided you are using (or can upgrade to) .NET 4.5.1 or newer, you can force the large object heap to be compacted by setting:
System.Runtime.GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
You will need to invoke this line of code each time you want to schedule a large object heap compaction at the next full garbage collection.
You can also force an immediate compaction by calling:
System.Runtime.GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
System.GC.Collect();
However, if you need to free a lot of memory, this will be a costly operation in terms of time.
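As a side note, if the pre-upload validation does not need the raw bytes in memory, a streaming variant of the Create action (only a sketch, reusing the types and URL from the question) avoids allocating the large byte[] in the first place, so nothing lands on the large object heap in the MVC app:

[HttpPost]
public async Task<ActionResult> Create(ReaderCreateViewModel model)
{
    HttpPostedFileBase file = Request.Files["Upload"];

    using (var client = new HttpClient())
    using (var content = new MultipartFormDataContent())
    {
        // Forward the posted stream directly instead of buffering it into a byte[].
        var fileContent = new StreamContent(file.InputStream);
        fileContent.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
        {
            FileName = file.FileName
        };
        content.Add(fileContent);

        var requestUri = "http://localhost:52970/api/upload";
        HttpResponseMessage response = await client.PostAsync(requestUri, content);
        if (response.IsSuccessStatusCode)
        {
            return RedirectToAction("Index");
        }
    }

    return View("Index", model);
}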
I'm having a tough time trying to send large files with HTTP file upload in ASP.NET.
The goal is to transfer content larger than 2 GB, compressed to save bandwidth. Sending 3 GB uncompressed works fine; all the files are received and saved correctly on disk. Yet when I use compression (either gzip or deflate), I get the following error from the receiving API:
Unexpected end of MIME multipart stream. MIME multipart message is not complete.
The exception occurs only when sending large requests (approx. 300 MB seems to be the upper limit); uploading 200 MB compressed works fine.
Here's the upload code:
using (var client = new HttpClient())
{
    using (var content = new MultipartFormDataContent())
    {
        client.BaseAddress = new Uri(fileServerUrl);
        client.DefaultRequestHeaders.TransferEncodingChunked = true;

        CompressedContent compressedContent = new CompressedContent(content, "gzip");
        var request = new HttpRequestMessage(HttpMethod.Post, "/api/fileupload/")
        {
            Content = compressedContent
        };

        var uploadResponse = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead).Result;
    }
}
The CompressedContent class comes from the WebApiContrib set of helpers.
Here's the receiving end:
// CustomStreamProvider is an implementation that stores the uploaded files on disk
var streamProvider = new CustomStreamProvider(uploadPath);
var res = await Request.Content.ReadAsMultipartAsync(streamProvider);
var fileNames = streamProvider.FileData
    .Select(file => new FileInfo(file.LocalFileName))
    .Select(fi => fi.FullName);
Can you give me a clue what the problem is? Why does larger content (over 300 MB) appear to be compressed improperly, while smaller content transfers just fine?
Are you compressing the file in chunks? If you are able to send uncompressed files, then clearly the problem is in your compression routine. Using gzip or deflate out of the box has limitations on file size, so you need to compress in chunks.
Tip:
Debug your compression routine. Try to compress your file and save it to the HDD and see if it is readable.
Take a look at this article
Check the Content-Length: the server-side routine does not expect the Content-Length to differ from the body it receives. You are probably sending the Content-Length of the original file while the body contains compressed content, so the multipart parser expects more data than actually arrives.
Instead, send a zipped file and extract it on the server side; do the zipping and unzipping at the application level rather than plugging compression into the HTTP protocol.
You will have to make sure the order of compression/decompression and the Content-Length are consistent on both sides. As far as I am aware, compressing the request body on upload is not very common in HTTP.
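A sketch of that application-level approach, assuming the client compresses to a temporary .gz file first and the server decompresses after CustomStreamProvider has saved the part to disk (sourceFilePath and the part name "file" are illustrative; fileServerUrl and /api/fileupload/ come from the code above):

// Client side: compress to a temporary .gz file, then upload it as an
// ordinary multipart part (no Content-Encoding / CompressedContent involved).
string gzPath = Path.GetTempFileName();
using (FileStream source = File.OpenRead(sourceFilePath))
using (FileStream target = File.Create(gzPath))
using (var gzip = new GZipStream(target, CompressionMode.Compress))
{
    source.CopyTo(gzip);
}

using (var client = new HttpClient { BaseAddress = new Uri(fileServerUrl) })
using (var content = new MultipartFormDataContent())
using (FileStream upload = File.OpenRead(gzPath))
{
    content.Add(new StreamContent(upload), "file", Path.GetFileName(sourceFilePath) + ".gz");
    var uploadResponse = client.PostAsync("/api/fileupload/", content).Result;
}

// Server side: after CustomStreamProvider has written the part to disk, open it with
// new GZipStream(File.OpenRead(localFileName), CompressionMode.Decompress) and copy
// the decompressed bytes to the final destination.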
Try to add a name to your input as below:
<input type="file" id="fileInput" name="fileInput"/>
Or use a custom stream to append the newline that ASP.NET Web API is expecting:
Stream reqStream = Request.Content.ReadAsStreamAsync().Result;
MemoryStream tempStream = new MemoryStream();
reqStream.CopyTo(tempStream);
tempStream.Seek(0, SeekOrigin.End);
StreamWriter writer = new StreamWriter(tempStream);
writer.WriteLine();
writer.Flush();
tempStream.Position = 0;
StreamContent streamContent = new StreamContent(tempStream);
foreach (var header in Request.Content.Headers)
{
    streamContent.Headers.Add(header.Key, header.Value);
}
// Read the form data and return an async task.
await streamContent.ReadAsMultipartAsync(provider);
I have a file that I am attempting to return to a web client using HttpResponseMessage. The code works, but the transfer speed is five to ten times slower than simply fetching the same file from an IIS virtual directory. I have verified that it's not an encoding issue by monitoring my download bandwidth consumption, which never breaks 250 kilobytes per second, whereas the direct download from IIS is typically five times that.
Here's the code, stripped to its essentials and with error trapping removed for clarity:
// Succeeded in getting the stream opened, so return with HTTP 200 status to the client.
var stream = new FileStream(uncPath, FileMode.Open, FileAccess.Read, FileShare.Read);
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
result.Content = new StreamContent(stream);
result.Content.Headers.ContentType = new MediaTypeHeaderValue(MimeExtensionHelper.GetMimeType(uncPath));
return result;
Am I missing something?
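One thing that may be worth checking, offered only as a sketch: StreamContent copies the stream with a small default buffer, and both FileStream and StreamContent have constructor overloads that accept an explicit buffer size, so experimenting with a larger value is cheap to try (the 64 KB below is an arbitrary choice):

// Same response as above, but with explicit, larger buffers for both the file
// read and the response copy.
const int bufferSize = 65536;
var stream = new FileStream(uncPath, FileMode.Open, FileAccess.Read, FileShare.Read, bufferSize);
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
result.Content = new StreamContent(stream, bufferSize);
result.Content.Headers.ContentType = new MediaTypeHeaderValue(MimeExtensionHelper.GetMimeType(uncPath));
return result;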
I am using https://github.com/rackspace/csharp-cloudfiles to build a command-line tool to upload files to Rackspace Cloud Files.
The problem is that I don't know how to track upload progress (there doesn't seem to be any kind of event for it).
Here's the code:
// Create credentials, client and connection
var creds = new UserCredentials(username, apiKey);
CF_Client client = new CF_Client();
Connection conn = new CF_Connection(creds, client);
conn.Authenticate();
// Get container and upload file
var container = new CF_Container(conn, client, containerName);
var obj = new CF_Object(conn, container, client, remoteFileName);
obj.WriteFromFile(localFilePath);
It doesn't look like there's one built in, no, but you could probably add your own.
An alternative would be to measure the input; if you look at the source you'll see that WriteFromFile is effectively just
Dictionary<string, string> headers = new Dictionary<string, string>();
using (Stream stream = System.IO.File.OpenRead(localFilePath))
{
    obj.Write(stream, headers);
}
so you could wrap the stream you pass to Write in another stream class that measures total-bytes-read progress (there are a few around if you search, or it's easy enough to write yourself; see the sketch below). If you did want progress notifications coming back from their code, you'd need to add them to the wrapped OpenStack Client object, but that shouldn't be too hard either.
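A minimal sketch of such a wrapper (the ProgressStream name and the callback are illustrative, not part of the library):

// Read-only stream decorator that reports how many bytes have been read so far.
public class ProgressStream : Stream
{
    private readonly Stream _inner;
    private readonly Action<long> _onProgress;
    private long _totalRead;

    public ProgressStream(Stream inner, Action<long> onProgress)
    {
        _inner = inner;
        _onProgress = onProgress;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int read = _inner.Read(buffer, offset, count);
        _totalRead += read;
        _onProgress(_totalRead); // e.g. update a console progress indicator
        return read;
    }

    // The rest of the Stream contract is delegated to the wrapped stream.
    public override bool CanRead { get { return _inner.CanRead; } }
    public override bool CanSeek { get { return _inner.CanSeek; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { return _inner.Length; } }
    public override long Position
    {
        get { return _inner.Position; }
        set { _inner.Position = value; }
    }
    public override void Flush() { _inner.Flush(); }
    public override long Seek(long offset, SeekOrigin origin) { return _inner.Seek(offset, origin); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}

With that in place, the WriteFromFile equivalent becomes obj.Write(new ProgressStream(stream, total => Console.WriteLine("{0} bytes sent", total)), headers);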