I'm trying to upload a large video (1 GB+) from my Xamarin app, and it keeps crashing once it reaches about 0.5 GB of the file. The only way I've found to post the videos to my WCF service while also sending data along with them is the multipart logic below, but I'm not sure if I'm running out of memory or what, because even in debug mode it simply crashes without any real error message.
I'm running it on a physical device (not a simulator): a Samsung Galaxy S9 with Android 9.
Here's the upload code I'm using. (P.S. As a test, I tried putting the WriteAsync into a for loop, thinking that maybe trying to write the whole gigabyte at once was the problem, but the result was the same. That's why you'll see the MAXFILESIZEPART constant in there, which is just an int equal to 10000000.)
private async Task<byte[]> GetMultipartFormDataAsync(Dictionary<string, object> postParameters, string boundary)
{
    try
    {
        using (Stream formDataStream = new System.IO.MemoryStream())
        {
            bool needsCLRF = false;

            foreach (var param in postParameters)
            {
                // Thanks to feedback from commenters, add a CRLF to allow multiple parameters to be added.
                // Skip it on the first parameter, add it to subsequent parameters.
                if (needsCLRF)
                    await formDataStream.WriteAsync(Encoding.UTF8.GetBytes("\r\n"), 0, Encoding.UTF8.GetByteCount("\r\n"));

                needsCLRF = true;

                if (param.Value is FileParameter)
                {
                    FileParameter fileToUpload = (FileParameter)param.Value;

                    // Add just the first part of this param, since we will write the file data directly to the Stream
                    string header = string.Format("--{0}\r\nContent-Disposition: form-data; name=\"{1}\"; filename=\"{2}\"\r\nContent-Type: {3}\r\n\r\n",
                        boundary,
                        param.Key,
                        fileToUpload.FileName ?? param.Key,
                        fileToUpload.ContentType ?? "application/octet-stream");

                    await formDataStream.WriteAsync(Encoding.UTF8.GetBytes(header), 0, Encoding.UTF8.GetByteCount(header));

                    // Write the file data directly to the Stream, rather than serializing it to a string.
                    if (fileToUpload.File.Length > MAXFILESIZEPART)
                    {
                        for (var i = 0; i < fileToUpload.File.Length; i += MAXFILESIZEPART)
                        {
                            var len = i + MAXFILESIZEPART > fileToUpload.File.Length
                                ? fileToUpload.File.Length - i
                                : MAXFILESIZEPART;

                            await formDataStream.WriteAsync(fileToUpload.File, i, len);
                        }
                    }
                    else
                    {
                        await formDataStream.WriteAsync(fileToUpload.File, 0, fileToUpload.File.Length);
                    }
                }
                else
                {
                    string postData = string.Format("--{0}\r\nContent-Disposition: form-data; name=\"{1}\"\r\n\r\n{2}",
                        boundary,
                        param.Key,
                        param.Value);

                    await formDataStream.WriteAsync(Encoding.UTF8.GetBytes(postData), 0, Encoding.UTF8.GetByteCount(postData));
                }
            }

            // Add the end of the request. Start with a newline.
            string footer = "\r\n--" + boundary + "--\r\n";
            await formDataStream.WriteAsync(Encoding.UTF8.GetBytes(footer), 0, Encoding.UTF8.GetByteCount(footer));

            // Dump the Stream into a byte[]
            formDataStream.Position = 0;
            byte[] formData = new byte[formDataStream.Length];
            formDataStream.Read(formData, 0, formData.Length);

            return formData;
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e);
        throw;
    }
}
It eventually fails on the following line
await formDataStream.WriteAsync(fileToUpload.File, i, len);
but only after a certain point (about 500 MB), so I'm assuming it's a memory issue, even though nothing says so explicitly. Is there a better way to accomplish this task? I'm doing it this way so that I can also report progress as the upload happens. I'm trying to accomplish something similar to uploading large videos via the Facebook app, where the upload continues in the background while you keep working. It works great with smaller files (i.e. < 500 MB), but this is the first time I've tried a file that's almost a gig in size.
NOTE: This happens BEFORE anything is posted to the server, so it's not IIS or WCF related. This code crashes just writing the bytes to the memory stream.
Any suggestions?
Thanks!
According to your description, the upload stops at a certain point, and since the file you transfer is about 1 GB, it is likely a SendTimeout problem: if the transfer does not complete within the specified time, an exception is thrown. SendTimeout specifies how long a write operation has to complete before timing out, and its default value is 1 minute.
For example, I set SendTimeout to 15 seconds in my configuration file; if sending the data takes more than 15 seconds, an exception occurs. You can set it to a higher value to avoid the timeout and the exception.
For more information about SendTimeout, please refer to the following link:
https://learn.microsoft.com/en-us/dotnet/api/system.servicemodel.channels.binding.sendtimeout?view=dotnet-plat-ext-3.1
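If you configure the binding in code rather than in a config file, the equivalent would look roughly like this (the values and the client type are placeholders, not your actual service):

// Requires System.ServiceModel; the timeout values here are arbitrary examples.
var binding = new BasicHttpBinding
{
    SendTimeout = TimeSpan.FromMinutes(30),           // default is 1 minute
    ReceiveTimeout = TimeSpan.FromMinutes(30),
    MaxReceivedMessageSize = 2L * 1024 * 1024 * 1024  // also raise the message size limit for large uploads
};
// "MyServiceClient" and the address are placeholders for your generated proxy and endpoint.
var serviceClient = new MyServiceClient(binding, new EndpointAddress("https://example.com/MyService.svc"));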
UPDATE
I think it might be an out-of-memory problem: a file this large may simply not fit in memory all at once while it is being read.
You can refer to the following link for a solution:
https://learn.microsoft.com/en-us/archive/blogs/johan/are-you-getting-outofmemoryexceptions-when-uploading-large-files
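If the crash really is memory pressure from building the whole multipart body in a MemoryStream, one alternative is to let HttpClient stream the file straight from disk with MultipartFormDataContent. A minimal sketch (the URL, field names and file path are placeholders, not the poster's actual code):

using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

static async Task UploadFileStreamedAsync(string uploadUrl, string filePath)
{
    using (var client = new HttpClient())
    using (var content = new MultipartFormDataContent("----Upload" + Guid.NewGuid().ToString("N")))
    using (var fileStream = File.OpenRead(filePath))
    {
        // Ordinary form fields go in as StringContent.
        content.Add(new StringContent("some value"), "someField");

        // The file itself is streamed from disk in small buffers instead of
        // being copied into one giant byte[] first.
        var fileContent = new StreamContent(fileStream, 64 * 1024);
        fileContent.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");
        content.Add(fileContent, "file", Path.GetFileName(filePath));

        var response = await client.PostAsync(uploadUrl, content);
        response.EnsureSuccessStatusCode();
    }
}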
Related
I am integrating my Xamarin.Android app with the Google Drive API. My main problem is that even though all API calls end up reporting success, some of my binary files are not persisted in Google Drive.
I'm trying to send 5-10 audio files (wav, mp3, ogg, etc.), sized 10-80 MB each.
Here's my approach to sending binary files:
string mimeType = GetMimeType(localFilePath);
string fullSourceFilePath = Path.Combine(documentsPath, localFilePath);
byte[] buffer = new byte[BinaryStreamsBufferLength];

using (var contentResults = await DriveClass.DriveApi.NewDriveContentsAsync(_googleApiClient).ConfigureAwait(false))
{
    if (contentResults.Status.IsSuccess) // always returns true
    {
        using (var readStream = System.IO.File.OpenRead(fullSourceFilePath))
        {
            using (var fileChangeSet = new MetadataChangeSet.Builder()
                .SetTitle(path[1])
                .SetMimeType(mimeType)
                .Build())
            {
                using (var writeStream = new BinaryWriter(contentResults.DriveContents.OutputStream))
                {
                    int bytesRead = readStream.Read(buffer, 0, BinaryStreamsBufferLength);
                    while (bytesRead > 0)
                    {
                        writeStream.Write(buffer, 0, bytesRead);
                        bytesRead = readStream.Read(buffer, 0, BinaryStreamsBufferLength);
                    }
                    writeStream.Close();
                }
                readStream.Close();

                var singleFileCopyResult = await targetCloudFolder.CreateFileAsync(_googleApiClient, fileChangeSet, contentResults.DriveContents,
                    new ExecutionOptions.Builder().SetConflictStrategy(ExecutionOptions.ConflictStrategyOverwriteRemote).Build()).ConfigureAwait(false);

                return singleFileCopyResult.Status.IsSuccess; // always returns true
            }
        }
    }
}
The above method is called in a foreach loop over the list of locally stored files that need to be sent.
Also, when connecting to the API for the first time, I request a sync:
if (_isSyncRequired)
    syncResult = await DriveClass.DriveApi.RequestSyncAsync(_googleApiClient).ConfigureAwait(false);
I've already spent a few days trying various solutions, including:
- changing stream types
- changing order of API method calls
- using non-Async API methods (which in fact are also async)
- experimented with different overloads and parameters when possible
- tried to refactor to non-obsolete API calls (impossible to do as it requires ultra-heavy GoogleSignIn NuGet with tons of dependencies and my app is heavy enough)
None of the above worked: only about 80% of the files I send can later be downloaded successfully, even though every upload reports success.
Has anyone experienced similar issues? Do you have any tips on what can be done to make a file downloadable when all the upload methods claim everything succeeded?
Many thanks in advance for any clues :)
I am trying to stream fragmented mp4 video to a mobile browser client using the Nancy MVC framework. Everything works fine; the code is enclosed below.
The thing is, the video is going to be generated at the same time as it is being streamed, so stream.Length will increase over time. Does anyone know what to do to support this scenario?
(I have tried committing a length in the "Content-Range" header, giving an arbitrary max size meant to encompass the whole video, but to no avail...)
/* called when /video is requested */
Get["/video"] = _ =>
{
    if (Request.Headers.Keys.Contains("Range"))
        return Response.FromPartialStream(Request, File.OpenRead("../Page/video-fragmented.mp4"), "video/mp4");
    else
        /* from stream... */
};

public static Response FromPartialStream(this IResponseFormatter f,
                                         Request req, Stream stream,
                                         string contentType)
{
    const string BYTES_RANGE_HEADER = "Range";

    if (req.Headers[BYTES_RANGE_HEADER].Count() != 1)
        throw new NotSupportedException();

    var rangeStr = req.Headers[BYTES_RANGE_HEADER].FirstOrDefault();
    var range = rangeStr.Replace("bytes=", String.Empty)
        .Split(new string[] { "-" }, StringSplitOptions.RemoveEmptyEntries)
        .Select(x => Int32.Parse(x))
        .ToArray();

    var start = (range.Length > 0) ? range[0] : 0;
    var end = (range.Length > 1) ? range[1] : (int)(stream.Length - 1);

    var res = new PartialStreamResponse(stream, start, end, contentType)
        .WithHeader("Connection", "keep-alive")
        .WithHeader("Accept-Ranges", "bytes")
        .WithHeader("Content-Range", "bytes " + start + "-" + end + "/" + stream.Length)
        .WithHeader("Content-Length", (end - start + 1).ToString());

    Console.WriteLine("Requested range: {0}", rangeStr);

    return res;
}

public class PartialStreamResponse : Response
{
    Stream sourceStream = null;
    int start, end;

    public PartialStreamResponse(Stream sourceStream, int start, int end, string mimeType)
    {
        this.sourceStream = sourceStream;
        this.start = start;
        this.end = end;

        Contents = populateRequest;
        StatusCode = HttpStatusCode.PartialContent;
        ContentType = mimeType;
    }

    private void populateRequest(Stream stream)
    {
        Console.WriteLine("Begin stream...");
        sourceStream.CopyTo(stream, start, end);
        Console.WriteLine("End stream");
    }
}
EDIT: serving such files should also work for mobile browsers (a single file would be preferred over HLS or DASH, which require segments).
Well, at least Chrome and Firefox happily handle 200 OK responses without Content-Range or Content-Length headers. In my opinion you should only reply with 206 Partial Content if the requested range has a valid end marker; otherwise just reply with 200 OK without a content length and push the stream.

Of course, another question is how to handle the file being generated live. I'd advise keeping the moov part in a separate file and generating a second file with the current moof. That way a new client initially gets the moov (which should be fixed), and once that data has been sent the server simply continues reading and serving the moof file, which is refreshed over time. Also, to avoid I/O starvation (the server trying to read the file while whatever generates the content is trying to write to it), you could have at least two moof files acting as a double buffer: one holds the last finished fragment and the other is the one currently being written.
In my opinion, 206 Partial Content responses are more useful for static video content than for live content: in the static case the browser can fetch the moov atom, parse all the size and offset tables, and offer seeking while the content is still loading, which is not possible for a live video.
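As a rough sketch of that idea (the file names and the GetLatestFinishedFragmentPath helper are assumptions about how the muxer writes its output, not working code from this answer):

Get["/live"] = _ => new Response
{
    ContentType = "video/mp4",
    StatusCode = HttpStatusCode.OK,   // plain 200, no Content-Length: just push bytes
    Contents = output =>
    {
        // 1. Send the fixed moov/init part once.
        using (var init = File.OpenRead("../Page/init.mp4"))
            init.CopyTo(output);

        // 2. Keep pushing the most recently finished fragment file. The muxer
        //    is assumed to alternate between at least two moof files so we
        //    never read the one currently being written.
        string lastSent = null;
        while (true)
        {
            // Hypothetical helper returning the path of the last completed moof file.
            string fragment = GetLatestFinishedFragmentPath();
            if (fragment != null && fragment != lastSent)
            {
                using (var frag = File.OpenRead(fragment))
                    frag.CopyTo(output);
                lastSent = fragment;
            }
            Thread.Sleep(100); // crude polling; client-disconnect handling omitted
        }
    }
};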
Use multiple requests (one per fragment) or use chunked transfer.
I'm currently developing for an environment that has poor network connectivity. My application helps to automatically download required Google Drive files for users. It works reasonably well for small files (ranging from 40KB to 2MB), but fails far too often for larger files (9MB). I know these file sizes might seem small, but in terms of my client's network environment, Google Drive API constantly fails with the 9MB file.
I've concluded that I need to download files in smaller byte chunks, but I don't see how I can do that with Google Drive API. I've read this over and over again, and I've tried the following code:
// with the Drive file ID and the appropriate export MIME type, create the export request
var request = DriveService.Files.Export(fileId, exportMimeType);

// take the message so I can modify it by hand
var message = request.CreateRequest();
var client = request.Service.HttpClient;

// change the Range headers of both the client and the message
client.DefaultRequestHeaders.Range =
    message.Headers.Range =
        new System.Net.Http.Headers.RangeHeaderValue(100, 200);

var response = await request.Service.HttpClient.SendAsync(message);

// if status code = 200, copy to local file
if (response.IsSuccessStatusCode)
{
    using (var fileStream = new FileStream(downloadFileName, FileMode.CreateNew, FileAccess.ReadWrite))
    {
        await response.Content.CopyToAsync(fileStream);
    }
}
The resulting local file (from fileStream), however, is still full length (i.e. a 40 KB file for the 40 KB Drive file, and a 500 Internal Server Error for the 9 MB file). As a side note, I've also experimented with ExportRequest.MediaDownloader.ChunkSize, but from what I observe it only changes the frequency at which the ExportRequest.MediaDownloader.ProgressChanged callback is called (i.e. the callback triggers every 256 KB if ChunkSize is set to 256 * 1024).
How can I proceed?
You seem to be heading in the right direction. From your last comment, the request updates progress based on the chunk size, so your observation was accurate.
Looking into the source code for MediaDownloader in the SDK, the following was found (emphasis mine):
The core download logic. We download the media and write it to an
output stream ChunkSize bytes at a time, raising the ProgressChanged
event after each chunk. The chunking behavior is largely a historical
artifact: a previous implementation issued multiple web requests, each
for ChunkSize bytes. Now we do everything in one request, but the API
and client-visible behavior are retained for compatibility.
Your example code will only download one chunk, from byte 100 to byte 200. Using that approach, you would have to keep track of an index and download each chunk manually, copying each partial download to the file stream:
const int KB = 0x400;
int ChunkSize = 256 * KB; // 256 KB

public async Task ExportFileAsync(string downloadFileName, string fileId, string exportMimeType)
{
    var exportRequest = driveService.Files.Export(fileId, exportMimeType);
    var client = exportRequest.Service.HttpClient;

    // you would need to know the file size
    var size = await GetFileSize(fileId);

    using (var file = new FileStream(downloadFileName, FileMode.CreateNew, FileAccess.ReadWrite))
    {
        file.SetLength(size);
        var chunks = (size / ChunkSize) + 1;

        for (long index = 0; index < chunks; index++)
        {
            var request = exportRequest.CreateRequest();
            var from = index * ChunkSize;
            var to = from + ChunkSize - 1;
            request.Headers.Range = new RangeHeaderValue(from, to);

            var response = await client.SendAsync(request);

            if (response.StatusCode == HttpStatusCode.PartialContent || response.IsSuccessStatusCode)
            {
                using (var stream = await response.Content.ReadAsStreamAsync())
                {
                    file.Seek(from, SeekOrigin.Begin);
                    await stream.CopyToAsync(file);
                }
            }
        }
    }
}

private async Task<long> GetFileSize(string fileId)
{
    var request = driveService.Files.Get(fileId);
    request.Fields = "size"; // ask for the size field explicitly, since it may not be returned by default
    var file = await request.ExecuteAsync();
    return file.Size ?? 0;   // Size is a nullable long on the Drive file resource
}
This code makes some assumptions about the Drive API/server:
- that the server will allow the multiple requests needed to download the file in chunks (I don't know whether such requests are throttled);
- that the server still accepts the Range header, as stated in the developer documentation.
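A hypothetical call site (the file name, file ID and export MIME type are placeholders):

await ExportFileAsync("report.pdf", "your-file-id", "application/pdf");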
I'm currently a bit stuck with my C# project.
I have two applications, and they both share a common class definition I call a NetMessage.
A NetMessage contains a MessageType string property, as well as two List collections.
The idea is that I can pack this class with classes and data to send across the network as a byte[].
Because network streams do not advertise the amount of data they are receiving, I modified my Send method to send the size of the NetMessage byte[] ahead of the actual byte[].
private static byte[] ReceivedBytes(NetworkStream MainStream)
{
    try
    {
        //byte[] myReadBuffer = new byte[1024];
        int receivedDataLength = 0;
        byte[] data = { };
        long len = 0;
        int i = 0;

        MainStream.ReadTimeout = 60000;
        //MainStream.CanTimeout = false;

        if (MainStream.CanRead)
        {
            // Read the length of the incoming message
            byte[] byteLen = new byte[8];
            MainStream.Read(byteLen, 0, 8);
            len = BitConverter.ToInt64(byteLen, 0);
            data = new byte[len];
            // data is now set to the appropriate size for the expected message

            // While we have not got the full message,
            // read each individual byte and append it to data.
            // This method seems to work, but is ridiculously slow.
            while (receivedDataLength < data.Length)
            {
                receivedDataLength += MainStream.Read(data, receivedDataLength, 1);
            }
            //receivedDataLength += MainStream.Read(data, receivedDataLength, data.Length);

            return data;
        }
    }
    catch (Exception E)
    {
        //System.Windows.Forms.MessageBox.Show("Exception:" + E.ToString());
    }
    return null;
}
I have tried to change the size argument below to something like 1024, or to data.Length, but I get funky results.
receivedDataLength += MainStream.Read(data, receivedDataLength, 1);
Setting it to data.Length seems to cause problems when the class being sent is a few MB in size.
Setting the buffer size to 1024, like I have seen in other examples, causes failures when the incoming message is small (like 843 bytes); it errors out saying that I tried to read out of bounds or something.
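For reference, a receive loop that reads in large pieces but never assumes a single Read call fills the buffer (Read may legally return fewer bytes than asked for) would look roughly like this sketch, reusing the variable names above:

byte[] byteLen = new byte[8];
int headerRead = 0;
while (headerRead < byteLen.Length)
{
    int n = MainStream.Read(byteLen, headerRead, byteLen.Length - headerRead);
    if (n == 0) throw new IOException("Connection closed before the length header arrived.");
    headerRead += n;
}

long len = BitConverter.ToInt64(byteLen, 0);
byte[] data = new byte[len];
int received = 0;
while (received < data.Length)
{
    // Ask for everything that is still missing; accept however much arrives.
    int n = MainStream.Read(data, received, data.Length - received);
    if (n == 0) throw new IOException("Connection closed mid-message.");
    received += n;
}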
Below is the type of method being used to send the data in the first place.
public static void SendBytesToStream(NetworkStream TheStream, byte[] TheMessage)
{
    //IAsyncResult r = TheStream.BeginWrite(TheMessage, 0, TheMessage.Length, null, null);
    //r.AsyncWaitHandle.WaitOne(10000);
    //TheStream.EndWrite(r);
    try
    {
        long len = TheMessage.Length;
        byte[] Bytelen = BitConverter.GetBytes(len);

        TheStream.Write(Bytelen, 0, Bytelen.Length);
        TheStream.Flush();
        // <-- I've tried putting thread sleeps in this spot to see if it helps

        // I've also tried writing each byte of the message individually;
        // it takes longer, but seems more accurate as far as network transmission goes?
        TheStream.Write(TheMessage, 0, TheMessage.Length);
        TheStream.Flush();
    }
    catch (Exception e)
    {
        //System.Windows.Forms.MessageBox.Show(e.ToString());
    }
}
I'd like to get these two methods to the point where they reliably send and receive data.
The application I am using this for monitors a screenshots folder in a game directory.
When it detects a screenshot in TGA format, it converts it to PNG, then takes its byte[] and sends it to the receiver.
The receiver then posts it to Facebook (I don't want my FB tokens distributed in my client application), hence the server/proxy idea.
It's strange, but when I step through the code, the transfer is invariably successful.
But if I run it at full speed, with no breakpoints, it typically tells me that the connection was closed by the remote host, etc.
The client typically finishes sending the data almost instantly, even though it's a 4 MB file.
The receiver spends about 2 minutes reading from the NetworkStream, which doesn't make sense: if the client finished sending the data, is the data just floating in cyberspace waiting to be pulled down?
Surely it should be synchronous?
I suspect I know where my code was going wrong.
It turns out that the TcpClient doing the sending was declared and instantiated within a method.
That being the case, it went out of scope and the garbage collector disposed of it, even though the receiving server had not finished downloading the stream.
I managed to resolve it by creating a method that detects when the client has disconnected on the server end, and keeps looping/waiting until it actually has disconnected.
This way, we wait until the server lets go of us.
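For illustration, one common way to wait for the remote side to close is a polling heuristic like the sketch below (a socket that polls as readable while Available is 0 has usually been closed by the peer); the method name and delay are arbitrary:

// Requires System.Net.Sockets and System.Threading.
private static void WaitUntilRemoteCloses(TcpClient client, int pollDelayMs = 250)
{
    while (true)
    {
        bool readable = client.Client.Poll(0, SelectMode.SelectRead);
        if (readable && client.Client.Available == 0)
            break; // remote side closed the connection
        Thread.Sleep(pollDelayMs);
    }
}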
In my application I can download media files from the web. Normally I used the WebClient.OpenReadCompleted method to download, decrypt and save the file to IsolatedStorage. It worked well and looked like this:
private void downloadedSong_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e, SomeOtherValues someOtherValues) // delegate, uses additional values
{
    // Some preparations
    try
    {
        if (e.Result != null)
        {
            using (isolatedStorageFile = IsolatedStorageFile.GetUserStoreForApplication())
            {
                // working with the gained stream, decryption
                // saving the decrypted file to isolatedStorage
                isolatedStorageFileStream = new IsolatedStorageFileStream("SomeFileNameHere", FileMode.OpenOrCreate, isolatedStorageFile);

                // and use it for MediaElement
                mediaElement.SetSource(isolatedStorageFileStream);
                mediaElement.Position = new TimeSpan(0);
                mediaElement.MediaOpened += new RoutedEventHandler(mediaFile_MediaOpened);

                // and some other work
            }
        }
    }
    catch (Exception ex)
    {
        // try/catch stuff
    }
}
But after some investigation I found out that with large files (for me, more than 100 MB) I get an OutOfMemory exception during the download. I suppose that's because WebClient.OpenReadCompleted loads the whole stream into RAM and chokes... and I will need even more memory to decrypt that stream.
After further investigation, I found a way to divide a large file into chunks after the OpenReadCompleted event, when saving the file to IsolatedStorage (or decrypting and then saving, in my case), but that only solves part of the problem... The primary problem is how to keep the phone from choking during the download itself. Is there a way to download a large file in chunks? Then I could use the solution I found to handle the decryption. (And I'd still need a way to load such a big file into a MediaElement, but that would be another question.)
Answer:
private WebHeaderCollection headers;
private int iterator = 0;
private int delta = 1048576;
private string savedFile = "testFile.mp3";

// some preparations

// Start downloading the first piece
using (IsolatedStorageFile isolatedStorageFile = IsolatedStorageFile.GetUserStoreForApplication())
{
    if (isolatedStorageFile.FileExists(savedFile))
        isolatedStorageFile.DeleteFile(savedFile);
}

headers = new WebHeaderCollection();
headers[HttpRequestHeader.Range] = "bytes=" + iterator.ToString() + '-' + (iterator + delta).ToString();

webClientReadCompleted = new WebClient();
webClientReadCompleted.Headers = headers;
webClientReadCompleted.OpenReadCompleted += downloadedSong_OpenReadCompleted;
webClientReadCompleted.OpenReadAsync(new Uri(song.Link));
// song.Link was given earlier

private void downloadedSong_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    try
    {
        if (e.Cancelled == false)
        {
            if (e.Result != null)
            {
                ((WebClient)sender).OpenReadCompleted -= downloadedSong_OpenReadCompleted;

                using (IsolatedStorageFile myIsolatedStorage = IsolatedStorageFile.GetUserStoreForApplication())
                {
                    using (IsolatedStorageFileStream fileStream = new IsolatedStorageFileStream(savedFile, FileMode.Append, FileAccess.Write, myIsolatedStorage))
                    {
                        int mediaFileLength = (int)e.Result.Length;
                        byte[] byteFile = new byte[mediaFileLength];
                        e.Result.Read(byteFile, 0, byteFile.Length);
                        fileStream.Write(byteFile, 0, byteFile.Length);

                        // If there's something left, download it recursively
                        if (byteFile.Length > delta)
                        {
                            iterator = iterator + delta + 1;
                            headers = new WebHeaderCollection();
                            headers[HttpRequestHeader.Range] = "bytes=" + iterator.ToString() + '-' + (iterator + delta).ToString();
                            webClientReadCompleted.Headers = headers;
                            webClientReadCompleted.OpenReadCompleted += downloadedSong_OpenReadCompleted;
                            webClientReadCompleted.OpenReadAsync(new Uri(song.Link));
                        }
                    }
                }
            }
        }
    }
    catch (Exception)
    {
        // try/catch stuff
    }
}
To download a file in chunks you'll need to make multiple requests, one for each chunk.
Unfortunately, it's not possible to say "get me this file and return it in chunks of size X".
Assuming that the server supports it, you can use the HTTP Range header to specify which bytes of a file the server should return in response to a request.
You then make multiple requests to get the file in pieces and put it all back together on the device. You'll probably find it simplest to make sequential calls and start the next one once you've received and verified the previous chunk.
This approach makes it simple to resume a download when the user returns to the app: you just look at how much was downloaded previously and then request the next chunk.
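As a sketch of that resume logic, reusing savedFile and delta from the code above (the header format mirrors the earlier snippet):

// The length of what is already saved tells you which byte to request next.
long alreadyDownloaded = 0;
using (var store = IsolatedStorageFile.GetUserStoreForApplication())
{
    if (store.FileExists(savedFile))
        using (var existing = store.OpenFile(savedFile, FileMode.Open, FileAccess.Read))
            alreadyDownloaded = existing.Length;
}

var resumeHeaders = new WebHeaderCollection();
resumeHeaders[HttpRequestHeader.Range] =
    "bytes=" + alreadyDownloaded + "-" + (alreadyDownloaded + delta);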
I've written an app which downloads movies (up to 2.6 GB) in 64 KB chunks and then plays them back from IsolatedStorage with the MediaPlayerLauncher. Playing via the MediaElement should work too, but I haven't verified it. You can test this by loading a large file directly into IsolatedStorage (via Isolated Storage Explorer, or similar) and checking the memory implications of playing it that way.
Confirmed: you can use BackgroundTransferRequest to download multi-GB files, but you must set TransferPreferences to None to force the download to happen while connected to an external power supply and to Wi-Fi, or the BackgroundTransferRequest will fail.
I wonder if it's possible to use a BackgroundTransferRequest to download large files easily and let the phone worry about the implementation details? The documentation seems to suggest that file downloads over 100 MB are possible, and the "Range" header is reserved for the system's own use, so it probably uses it automatically behind the scenes when it can.
From the documentation regarding files over 100 MB:
For files larger than 100 MB, you must set the TransferPreferences
property of the transfer to None or the transfer will fail. If you do
not know the size of a transfer and it is possible that it could
exceed this limit, you should set the value to None, meaning that the
transfer will only proceed when the phone is connected to external
power and has a Wi-Fi connection.
From the documentation regarding use of the "Range" header:
The Headers property of the BackgroundTransferRequest object is used
to set the HTTP headers for a transfer request. The following headers
are reserved for use by the system and cannot be used by calling
applications. Adding one of the following headers to the Headers
collection will cause a NotSupportedException to be thrown when the
Add(BackgroundTransferRequest) method is used to queue the transfer
request:
If-Modified-Since
If-None-Match
If-Range
Range
Unless-Modified-Since
Here's the documentation:
http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202955(v=vs.105).aspx
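For reference, queuing such a transfer looks roughly like this (a sketch; the URI and file name are placeholders, and the download target must point into the /shared/transfers folder in isolated storage):

// Requires Microsoft.Phone.BackgroundTransfer.
var source = new Uri("http://example.com/videos/big-movie.mp4");
var destination = new Uri("/shared/transfers/big-movie.mp4", UriKind.Relative);

var request = new BackgroundTransferRequest(source, destination)
{
    // Required for transfers over 100 MB; also restricts the transfer to
    // external power and Wi-Fi, as quoted above.
    TransferPreferences = TransferPreferences.None
};

BackgroundTransferService.Add(request);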