I have a file that I am attempting to return to a web client using HttpResponseMessage. The code works, but the transfer speed is five to ten times slower than simply fetching the same file from an IIS virtual directory. I have verified that it's not an encoding issue by monitoring my download bandwidth consumption, which never breaks 250 kilobytes per second, whereas the direct download from IIS is typically five times that.
Here's the code, stripped to its essentials and with error trapping removed for clarity:
// Succeeded in getting the stream opened, so return with HTTP 200 status to the client.
var stream = new FileStream(uncPath, FileMode.Open, FileAccess.Read, FileShare.Read);
HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
result.Content = new StreamContent(stream);
result.Content.Headers.ContentType = new MediaTypeHeaderValue(MimeExtensionHelper.GetMimeType(uncPath));
return result;
Am I missing something?
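For completeness, here is the same code with explicit buffer sizes, which I have also been experimenting with. Both overloads exist in the framework; the 64 KB figure is a guess on my part, not a measured optimum:
// Same logic, but with an explicit 64 KB buffer on both the file read and the response copy.
var stream = new FileStream(uncPath, FileMode.Open, FileAccess.Read, FileShare.Read,
    bufferSize: 65536, useAsync: true);
var result = new HttpResponseMessage(HttpStatusCode.OK);
result.Content = new StreamContent(stream, bufferSize: 65536); // the default buffer is much smaller (4 KB)
result.Content.Headers.ContentType = new MediaTypeHeaderValue(MimeExtensionHelper.GetMimeType(uncPath));
return result;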
After creating an upload session, I'm uploading the content of a large file by sending chunk requests. However, sending the requests one at a time makes uploading the entire file take a relatively long time.
Is it possible to send multiple chunk requests at the same time by using multithreading?
I tried using Parallel.ForEach, but it doesn't work.
int maxSizeChunk = 320 * 1024 * 4; // chunk size must be a multiple of 320 KiB per the upload session docs
ChunkedUploadProvider uploadProvider = new ChunkedUploadProvider(uploadSession, client, ms, maxSizeChunk);
IEnumerable<UploadChunkRequest> chunkRequests = uploadProvider.GetUploadChunkRequests();
List<Exception> exceptions = new List<Exception>();
byte[] readBuffer = new byte[maxSizeChunk];
DriveItem uploadedFile = null; // receives the completed item once the final chunk succeeds
// How to send multiple requests at once?
foreach (UploadChunkRequest request in chunkRequests)
{
UploadChunkResult result = await uploadProvider.GetChunkRequestResponseAsync(request, readBuffer, exceptions);
if (result.UploadSucceeded)
uploadedFile = result.ItemResponse;
}
I don't think you can upload via multiple streams at the same time.
Per documentation:
The fragments of the file must be uploaded sequentially in order. Uploading fragments out of order will result in an error.
I am trying to send a continuous stream, from a C# application, to an ASP Core REST API.
I define a continuous stream as, for example, someone talking into a microphone with the sound being sent directly to the REST API (without being saved to a local file first) to be saved to a file there.
I have been searching a lot on Google for something like that and so far could not find anything really useful.
I have been trying to emulate it by sending a large file (297 MB).
This is what I have so far for the client side:
string TARGETURL = "http://localhost:58000/api/file/";
string filePath = @"G:\Voice\Samples\The Monkey's Paw.wav";
byte[] fileContent = File.ReadAllBytes(filePath);
var dummyStream = new MemoryStream(fileContent);
var inputData = new StreamContent(dummyStream);
HttpResponseMessage response = this._httpClient.PostAsync(TARGETURL, inputData).Result;
HttpContent result = response.Content;
if (response.IsSuccessStatusCode)
{
string contents = result.ReadAsStringAsync().Result;
}
else
{
// do something
}
And for the server side:
[Route("")]
[HttpPost]
public async Task<JsonResult> Post()
{
Dictionary<string, object> rv = new Dictionary<string, object>();
try
{
string file = @"G:\Voice\Samples\dummy.txt";
using (FileStream fs = new FileStream(file, FileMode.Create, FileAccess.Write,
FileShare.None, 4096, useAsync: true))
{
await Request.Body.CopyToAsync(fs);
}
// complete the transaction
rv.Add("success", true);
rv.Add("error", "");
}
catch (Exception ex)
{
    rv.Add("success", false);
    rv.Add("error", ex.Message);
}
return Json(rv);
}
When I send the file, the server throws the following exception:
The request's Content-Length 304137380 is larger than the request body size limit 30000000.
I know that I could increase the body size limit, but that's not a long-term solution, as the stream length could exceed any limit I set.
That's why I am trying to find a solution that sends the stream in chunks for the server to rebuild and write to a file.
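In the meantime, here are two adjustments I have been testing together, as a sketch rather than a confirmed fix: on the server, [DisableRequestSizeLimit] removes the cap for this one action instead of raising it; on the client, posting a non-seekable stream means HttpClient sends no Content-Length header and switches to chunked transfer encoding on its own (liveStream below is an illustrative name):
// Server: opt this single action out of the request body size limit.
[Route("")]
[HttpPost]
[DisableRequestSizeLimit]
public async Task<JsonResult> Post()
{
    // body unchanged from above
}

// Client: a non-seekable stream (e.g. wrapping a live capture) carries no
// Content-Length, so HttpClient falls back to chunked transfer automatically.
HttpResponseMessage response = await this._httpClient.PostAsync(TARGETURL, new StreamContent(liveStream));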
What you probably want to do is use a different network stack. A web application will always try to fit everything into HTTP, which is a very specific way to communicate, and REST is built on top of these ideas as well. On the Internet things are generally thought of as documents with references, and REST is an extension of that idea.
It does however sit on top of some other great technologies that might suit your need better.
There's nothing to stop you using the internet, but you may need to look at a UDP- or TCP-level implementation. Be aware that you will still be sending information in packets; there is no such thing as a constant stream of bits on the internet. A sound wave in the real world is a continuous thing, but computers are rubbish at that.
Maybe start by taking a look at using sockets and a library like NAudio.
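A minimal sketch of that direction, assuming NAudio's WaveInEvent for capture and a plain TcpClient for transport (the host, port, and audio format here are illustrative):
using System.Net.Sockets;
using NAudio.Wave;

// Connect a raw TCP socket to the receiving server.
var client = new TcpClient("localhost", 5000);
NetworkStream networkStream = client.GetStream();

// Capture from the default microphone: 44.1 kHz, 16-bit, mono.
var waveIn = new WaveInEvent { WaveFormat = new WaveFormat(44100, 16, 1) };

// Forward each captured buffer to the server as soon as it arrives.
waveIn.DataAvailable += (sender, e) => networkStream.Write(e.Buffer, 0, e.BytesRecorded);
waveIn.StartRecording();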
So I currently have 3 sets of small files being generated (think about 1-2 per second per set), which I'd like to upload to a server in a timely manner. The files aren't large, but due to the number of them, it seems to clog up the server. Here's my current implementation:
System.Net.Http.HttpRequestMessage request = new System.Net.Http.HttpRequestMessage(System.Net.Http.HttpMethod.Post, _serverUri);
MultipartFormDataContent form = new MultipartFormDataContent();
request.Content = form;
HttpContent content1 = new ByteArrayContent(filePart1);
content1.Headers.ContentDisposition = new ContentDispositionHeaderValue("filePart1");
content1.Headers.ContentDisposition.FileName = "filePart1";
form.Add(content1);
HttpContent content2 = new ByteArrayContent(filePart2);
content2.Headers.ContentDisposition = new ContentDispositionHeaderValue("filePart2");
content2.Headers.ContentDisposition.FileName = "filePart2";
form.Add(content2);
_httpClient.SendAsync(request).ContinueWith((response) =>
{
ProcessResponse(response.Result);
});
Here filePart1 and filePart2 each represent a file from one of the data sets; three of these run concurrently, one for each set of files.
What I'd like to know, is if there's a better way to accomplish this, since the current method seems to clog up the server with its bombardment of files. Would it be possible to somehow open a stream for each set of files and send each file as a chunk somehow? Should I wait for the server to respond before sending the next file?
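One variation I have been considering, sketched under the assumption that a fixed concurrency cap is acceptable (the cap of 4 is arbitrary), is gating the in-flight requests with a SemaphoreSlim so the server never sees more than a few uploads at once:
private static readonly SemaphoreSlim _uploadGate = new SemaphoreSlim(4); // at most 4 concurrent uploads

private async Task SendThrottledAsync(System.Net.Http.HttpRequestMessage request)
{
    await _uploadGate.WaitAsync();
    try
    {
        // Only proceeds when fewer than 4 uploads are already in flight.
        System.Net.Http.HttpResponseMessage response = await _httpClient.SendAsync(request);
        ProcessResponse(response);
    }
    finally
    {
        _uploadGate.Release();
    }
}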
I'm having a tough time trying to send large files with HTTP file upload in ASP.NET.
The goal is to transfer content larger than 2 GB, compressed to save bandwidth. Sending 3 GB uncompressed works well; all the files are received and saved correctly on disk. Yet when I use compression (either gzip or deflate), I get the following error from the receiving API:
Unexpected end of MIME multipart stream. MIME multipart message is not complete.
The thing is, I only get the exception when sending large requests (approx. 300 MB seems to be the threshold); uploading 200 MB compressed works well.
Here's the upload code:
using (var client = new HttpClient())
{
using (var content = new MultipartFormDataContent())
{
client.BaseAddress = new Uri(fileServerUrl);
client.DefaultRequestHeaders.TransferEncodingChunked = true;
CompressedContent compressedContent = new CompressedContent(content, "gzip");
var request = new HttpRequestMessage(HttpMethod.Post, "/api/fileupload/")
{
Content = compressedContent
};
var uploadResponse = client.SendAsync(request, HttpCompletionOption.ResponseHeadersRead).Result;
}
}
The CompressedContent class comes from the WebApiContrib set of helpers.
Here's the receiving end
// CustomStreamProvider is an implementation to store the files uploaded on the disk
var streamProvider = new CustomStreamProvider(uploadPath);
var res = await Request.Content.ReadAsMultipartAsync(streamProvider);
var uploadedFiles = streamProvider.FileData
    .Select(file => new FileInfo(file.LocalFileName))
    .Select(fi => fi.FullName)
    .ToList();
Can you provide me with a clue what the problem is? Why is it that larger contents (larger than 300MB) appear to be compressed improperly while smaller are transferred just fine?
Are you compressing the file in chunks? If you are able to send uncompressed files, then clearly the problem is in your compression routine. Using gzip or deflate out of the box has limitations on file size, so you need to compress in chunks.
Tip:
Debug your compression routine. Try to compress your file and save it to the HDD and see if it is readable.
Take a look at this article
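A round-trip check along these lines (the paths are hypothetical) will tell you whether the compressed payload itself survives intact; it requires System.IO and System.IO.Compression:
// Compress the original file to disk, then decompress it again and compare.
using (var input = File.OpenRead(@"C:\temp\payload.bin"))
using (var output = File.Create(@"C:\temp\payload.gz"))
using (var gzip = new GZipStream(output, CompressionMode.Compress))
{
    input.CopyTo(gzip);
}

using (var input = File.OpenRead(@"C:\temp\payload.gz"))
using (var gzip = new GZipStream(input, CompressionMode.Decompress))
using (var output = File.Create(@"C:\temp\payload.roundtrip.bin"))
{
    gzip.CopyTo(output); // diff this file against the original
}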
Check the Content-Length: the server-side routine does not expect the Content-Length to disagree with the body. You are probably sending the Content-Length of the original file while the body contains compressed content, and the multipart parser expects an amount of data equal to Content-Length.
Instead, send a zipped file and extract it on the server side; keep zipping and unzipping at the application level instead of plugging them inside the HTTP protocol.
You will have to make sure the order of compression/decompression and the Content-Length are correct on both sides. As far as I am aware, compressing uploads is not very common over HTTP.
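A sketch of that application-level approach on the receiving side, assuming the client posts a gzip-compressed body rather than multipart content (targetPath is illustrative):
// Decompress the request body at application level (requires System.IO.Compression).
using (var gzip = new GZipStream(await Request.Content.ReadAsStreamAsync(), CompressionMode.Decompress))
using (var file = File.Create(targetPath))
{
    await gzip.CopyToAsync(file);
}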
Try to add a name to your input as below:
<input type="file" id="fileInput" name="fileInput"/>
Or use a custom stream to append the newline that ASP.NET web api is expecting:
// Buffer the request body into memory so it can be modified.
Stream reqStream = Request.Content.ReadAsStreamAsync().Result;
MemoryStream tempStream = new MemoryStream();
reqStream.CopyTo(tempStream);
// Append the trailing newline the multipart parser expects.
tempStream.Seek(0, SeekOrigin.End);
StreamWriter writer = new StreamWriter(tempStream);
writer.WriteLine();
writer.Flush();
tempStream.Position = 0;
StreamContent streamContent = new StreamContent(tempStream);
// Copy the original content headers onto the replacement content.
foreach (var header in Request.Content.Headers)
{
streamContent.Headers.Add(header.Key, header.Value);
}
// Read the form data and return an async task.
await streamContent.ReadAsMultipartAsync(provider);
EDIT: FileZilla caused the problem; when I downloaded the files back from the server, it added the new lines. I'm sorry for the confusion.
This method uploads files to an FTP server and generally works fine, but in text files uploaded to the server, blank lines appear after every line (extra "CR LF" pairs appear). For example:
File:
First line
Second line
Third line
Uploaded file:
First line

Second line

Third line
The original and uploaded files accordingly have different sizes; non-text files come through identical.
Code:
private void sendFile(string In, string Out)
{
FtpWebRequest request = (FtpWebRequest) WebRequest.Create("ftp://domain//" + Out);
request.Method = WebRequestMethods.Ftp.UploadFile;
request.Credentials = new NetworkCredential("username", "password");
FileStream sourceStream = new FileStream(In, FileMode.Open, FileAccess.Read, FileShare.Read);
byte[] fileContents = new byte[sourceStream.Length];
sourceStream.Read(fileContents, 0, (int) sourceStream.Length);
sourceStream.Close();
request.ContentLength = fileContents.Length;
Stream requestStream = request.GetRequestStream();
requestStream.Write(fileContents, 0, fileContents.Length);
requestStream.Close();
}
How can I fix this?
EDIT: As the answer below doesn't seem to have helped (but I'm leaving it there for posterity, as it shows better code), here are the next diagnostic steps I'd check:
How are you viewing the files? If at all possible, get onto the server directly rather than fetching the files again via a web browser or whatever.
What's the type of FTP server you're connecting to? Maybe there's a known issue.
Have you tried looking at what's actually being sent via Wireshark?
Have you tried sending the same files via a normal FTP client?
You should set FtpWebRequest.UseBinary to true in order to preserve the exact file contents. Otherwise the two systems will try to figure out line endings themselves, changing line terminators as they see fit. I very rarely think that's a good idea. (EDIT: UseBinary is actually true by default, but this sounds like the kind of problem introduced by using text mode... it certainly does no harm to make this explicit.)
Additionally:
You should be disposing of your FileStream via a using statement
You should be disposing of the request stream via a using statement
You should be taking note of the result of Stream.Read - it needn't always read the whole of the requested data in one go
You can either use File.ReadAllBytes to simply read the complete file data in one go, or use Stream.CopyTo (if you're using .NET 4) to copy the FileStream to the request stream (which won't set the content length, of course; I don't know whether this is a problem)
You're never calling GetResponse; it's unclear exactly what happens if you never fetch the response of an FtpWebRequest
Your parameter names don't match .NET naming conventions, and aren't very descriptive
So I would probably use:
private void SendFile(string inputFile, string outputPath)
{
FtpWebRequest request = (FtpWebRequest) WebRequest.Create
("ftp://domain//" + outputPath);
request.Method = WebRequestMethods.Ftp.UploadFile;
request.UseBinary = true;
request.Credentials = new NetworkCredential("username", "password");
byte[] fileContents = File.ReadAllBytes(inputFile);
request.ContentLength = fileContents.Length;
using (Stream requestStream = request.GetRequestStream())
{
requestStream.Write(fileContents, 0, fileContents.Length);
}
// This *may* be necessary in order to validate that everything has happened
using (WebResponse response = request.GetResponse())
{
}
}
It's strange. I faced the same problem and was unable to fix it until I gave the file an extension. For example, if my file name was
abcfile
then I renamed it to abcfile.dat, and after that the uploaded file matched the original. When I uploaded it again as abcfile.txt, the empty-line problem reappeared.
I suggest giving your file any extension except .txt.
The system that you're sending to uses different line endings from what your system uses. I can assume, because you get an extra line, that you're on Windows, which uses CRLF endings. The system you're sending to recognises CR and LF as separate endings, so you get the extra lines.
For text, try stripping the LF or the CR and see what happens. I have no clue about the differing file sizes.
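A quick way to test that theory, assuming the file fits in memory, is to normalise the endings before upload and see whether the blank lines disappear (the path is illustrative):
// Rewrite the file with bare LF endings before uploading (test only).
string text = File.ReadAllText(@"C:\temp\sample.txt");
File.WriteAllText(@"C:\temp\sample-lf.txt", text.Replace("\r\n", "\n"));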
In the top menu of FileZilla, set:
Transfer menu > Transfer type > binary
It's working for me.