Incomplete video streaming to mobile browsers - HTTP headers - C#

I am trying to stream fragmented MP4 video to a mobile browser client using the Nancy MVC framework. Everything works fine; the code is enclosed below.
The catch is that the video will be generated at the same time as it is being streamed, so stream.Length will grow over time. Does anyone know what to do to support this scenario?
(I have tried putting the length in the "Content-Range" header, giving an arbitrary max size to encompass the whole video, but to no avail...)
/*called when /video is requested*/
Get["/video"] = _ =>
{
if (Request.Headers.Keys.Contains("Range"))
return Response.FromPartialStream(Request, File.OpenRead("../Page/video-fragmented.mp4"), "video/mp4");
else
/*from stream...*/
};
public static Response FromPartialStream(this IResponseFormatter f,
Request req, Stream stream,
string contentType)
{
const string BYTES_RANGE_HEADER = "Range";
if (req.Headers[BYTES_RANGE_HEADER].Count() != 1)
throw new NotSupportedException();
var rangeStr = req.Headers[BYTES_RANGE_HEADER].FirstOrDefault();
var range = rangeStr.Replace("bytes=", String.Empty)
.Split(new string[] { "-" }, StringSplitOptions.RemoveEmptyEntries)
.Select(x => Int32.Parse(x))
.ToArray();
var start = (range.Length > 0) ? range[0] : 0;
var end = (range.Length > 1) ? range[1] : (int)(stream.Length - 1);
var res = new PartialStreamResponse(stream, start, end, contentType)
.WithHeader("Connection", "keep-alive")
.WithHeader("Accept-Ranges", "bytes")
.WithHeader("Content-Range", "bytes " + start + "-" + end + "/" + stream.Length)
.WithHeader("Content-Length", (end - start + 1).ToString());
Console.WriteLine("Requested range: {0}", rangeStr);
return res;
}
public class PartialStreamResponse : Response
{
Stream sourceStream = null;
int start, end;
public PartialStreamResponse(Stream sourceStream, int start, int end, string mimeType)
{
this.sourceStream = sourceStream;
this.start = start;
this.end = end;
Contents = populateRequest;
StatusCode = HttpStatusCode.PartialContent;
ContentType = mimeType;
}
private void populateRequest(Stream stream)
{
Console.WriteLine("Begin stream...");
// Stream.CopyTo has no (start, end) overload in the BCL, so copy the range by hand.
sourceStream.Seek(start, SeekOrigin.Begin);
var buffer = new byte[65536];
var remaining = end - start + 1;
int read;
while (remaining > 0 && (read = sourceStream.Read(buffer, 0, Math.Min(buffer.Length, remaining))) > 0)
{
stream.Write(buffer, 0, read);
remaining -= read;
}
Console.WriteLine("End stream");
}
}
EDIT: serving such files should also work for mobile browsers (a single file would be preferred over HLS or DASH, which require segments)

Well, at least Chrome and Firefox happily handle 200 OK responses without Content-Range or Content-Length headers in the response. In my opinion you should only reply with 206 Partial Content if the request range has a valid end marker; otherwise just reply with 200 OK without a content length and push the stream.

Of course, another question is how to handle the file being generated live. I'd advise keeping the moov part in a separate file and generating a second file with the current moof. That way a new client would initially get the moov (which should be fixed), and when that data has been sent the server would just continue reading and serving the moof file, which gets refreshed. Also, to escape I/O starvation (the server trying to read the file while whatever generates the content is trying to write to it), you could have at least two moof files which act as a double buffer: one is the last finished fragment and the other is the one currently being written.
In my opinion, 206 Partial Content responses are more useful for static video content than for live, because with static content the browser can get the moov atom, parse all the size and offset tables, and offer seeking while the content is being loaded, which is not possible for a live video.
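Here is a minimal sketch of that moov/moof double-buffer scheme, not tested code: "init.mp4", the two "fragN.mp4" files, and the currentFragmentIndex callback are all assumed names, with init.mp4 holding the fixed ftyp+moov and the encoder alternating finished fragments between the two frag files.

using System;
using System.IO;
using System.Threading;

static void ServeLiveMp4(Stream output, Func<int> currentFragmentIndex)
{
    // 1. Send the fixed initialization segment (ftyp + moov) first.
    using (var init = File.OpenRead("init.mp4"))
        init.CopyTo(output);

    int lastServed = -1;
    while (true)
    {
        int ready = currentFragmentIndex(); // index of the last *finished* fragment
        if (ready == lastServed)
        {
            Thread.Sleep(100); // encoder is still writing the next moof; wait
            continue;
        }
        // 2. Append the finished fragment. The encoder is writing the other
        // file of the pair, so reader and writer never touch the same file.
        using (var frag = File.OpenRead("frag" + (ready % 2) + ".mp4"))
            frag.CopyTo(output);
        output.Flush();
        lastServed = ready;
    }
}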

Use multiple requests (one per fragment) or use chunked transfer.
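For the chunked-transfer option, a rough sketch with HttpListener (rather than Nancy) could look like the following; the file name, port, and encoderFinished signal are assumptions. With SendChunked = true the framework emits Transfer-Encoding: chunked, so no Content-Length is needed and the response can keep going while the file grows.

using System;
using System.IO;
using System.Net;
using System.Threading;

static void ServeChunked(Func<bool> encoderFinished)
{
    var listener = new HttpListener();
    listener.Prefixes.Add("http://localhost:8080/video/");
    listener.Start();
    var response = listener.GetContext().Response;
    response.ContentType = "video/mp4";
    response.SendChunked = true; // each write goes out as one HTTP chunk

    // FileShare.ReadWrite lets the encoder keep appending while we read.
    using (var file = new FileStream("video-fragmented.mp4", FileMode.Open,
        FileAccess.Read, FileShare.ReadWrite))
    {
        var buffer = new byte[64 * 1024];
        while (true)
        {
            int read = file.Read(buffer, 0, buffer.Length);
            if (read > 0)
                response.OutputStream.Write(buffer, 0, read);
            else if (encoderFinished())
                break; // file is complete, end the response
            else
                Thread.Sleep(100); // at EOF but still recording; wait for more data
        }
    }
    response.Close();
}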

Related

How best to respond to an open HTTP range request

I am currently going through the process of attempting to respond to HTTP Range Requests so that video can be streamed from our server and also satisfy Safari actually playing the video.
I do have the added complication that the video file is encrypted on disk preventing me from being able to Seek in the stream but that is not really within the scope of this question.
I have noticed that Chrome and at least the new Edge request open ranges (0-). What is the correct thing to respond with here? Currently I respond with the full video, which I understand completely defeats the purpose of streaming.
var range = context.Request.Headers["Range"].Split('=', '-');
var startByteIndex = int.Parse(range[1]);
// Quite a few browsers send open-ended requests for the range (e.g. 0-).
long endByteIndex;
if (!long.TryParse(range[2], out endByteIndex))
{
endByteIndex = streamLength - 1;
}
Below is my full attempt so far at responding.
if (!string.IsNullOrEmpty(context.Request.Headers["Range"]))
{
var range = context.Request.Headers["Range"].Split('=', '-');
var startByteIndex = int.Parse(range[1]);
// Quite a few browsers send open-ended requests for the range (e.g. 0-).
long endByteIndex;
if (!long.TryParse(range[2], out endByteIndex))
{
endByteIndex = streamLength - 1;
}
Debug.WriteLine("Range request for " + context.Request.Headers["Range"]);
// Make sure the request is within the bounds of the video.
if (endByteIndex >= streamLength)
{
context.Response.StatusCode = (int)HttpStatusCode.RequestedRangeNotSatisfiable;
return false;
}
var currentIndex = 0;
// SEEKING IS NOT WHOLE AND COMPLETE.
// Get to the requested start. We are not allowed to seek with CBC + AES.
while (currentIndex < startByteIndex) // TODO: we could probably work out a more suitable buffer size here to get to the start index.
{
var dummy = new byte[bufferLength];
var a = videoReadStream.Read(dummy, 0, bufferLength);
currentIndex += bufferLength;
}
// Fast but unreliable given AES + CBC.
//fileStream.Seek(startByteIndex, SeekOrigin.Begin);
dataToRead = endByteIndex - startByteIndex + 1;
// Supply the relevant partial content headers.
context.Response.StatusCode = (int)HttpStatusCode.PartialContent;
context.Response.AddHeader("Content-Range", $"bytes {startByteIndex}-{endByteIndex}/{streamLength}");
context.Response.AddHeader("Content-Length", dataToRead.ToString());
}
else
{
context.Response.AddHeader("Cache-Control", "private, max-age=1200");
context.Response.Cache.SetExpires(DateTime.Now.AddMinutes(20));
context.Response.AddHeader("content-disposition", "inline;filename=" + fileID);
context.Response.AddHeader("Accept-Ranges", "bytes");
}
var buffer = new byte[bufferLength];
while (dataToRead > 0 && context.Response.IsClientConnected)
{
// Use the count actually returned by Read; the final chunk is usually smaller than the buffer.
var bytesRead = videoReadStream.Read(buffer, 0, (int)Math.Min(bufferLength, dataToRead));
if (bytesRead <= 0)
break;
// Write the data to the current output stream.
context.Response.OutputStream.Write(buffer, 0, bytesRead);
// Flush the data to the HTML output.
context.Response.Flush();
dataToRead -= bytesRead;
}
Most frustratingly, I notice that Edge (I haven't tested others yet) always seems to send an open-ended request:
Range request for bytes=0-
Range request for bytes=1867776-
Range request for bytes=3571712-
Range request for bytes=5341184-
Range request for bytes=7176192-
Range request for bytes=9273344-
Range request for bytes=10977280-
Range request for bytes=12943360-
Range request for bytes=14614528-
Range request for bytes=16384000-
Range request for bytes=18087936-
Range request for bytes=19955712-
Range request for bytes=21823488-
Range request for bytes=23625728-
Range request for bytes=25690112-
Range request for bytes=27525120-
Range request for bytes=39256064-
Range request for bytes=41222144-
Range request for bytes=42270720-
Should I just decide on a chunk size to respond with and stick with that? I notice that if I respond with chunks of just 3 bytes, Edge does request a lot more ranges, in increments of 3.
This is a similar, but not identical, question to What byte range 0- means.
What is the correct thing to respond with here?
The correct thing to respond with here is the entire resource. However, as this client included a Range header, you may also respond with 206 Partial Content and a subset of the file.
Should I just decide on a chunk size to respond with and stick with that?
Pretty much, based on what's efficient for your server. Note that, as in Firefox won't request further data after receiving 206 with specified content range, you may encounter browsers that don't do the right thing.
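To illustrate the chunk-size approach concretely, here is a sketch that caps an open-ended range at a fixed size, reusing startByteIndex and streamLength from the code in the question; the 4 MB cap is an arbitrary choice.

// Serve at most maxChunk bytes per 206 response and let the browser come back.
const long maxChunk = 4 * 1024 * 1024; // 4 MB, pick whatever suits your server

long start = startByteIndex;
long end = Math.Min(start + maxChunk - 1, streamLength - 1);

context.Response.StatusCode = (int)HttpStatusCode.PartialContent;
context.Response.AddHeader("Accept-Ranges", "bytes");
context.Response.AddHeader("Content-Range", $"bytes {start}-{end}/{streamLength}");
context.Response.AddHeader("Content-Length", (end - start + 1).ToString());
// ...then write exactly (end - start + 1) bytes to the output stream. The
// browser will follow up with "Range: bytes=<end + 1>-" for the next piece,
// exactly like the Edge log above.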

Uploading Large Files to WCF from Xamarin Android App Crashes

I'm trying to upload a large video (1 GB+) from my Xamarin app, and it keeps crashing once it reaches about 0.5 GB of the file. The only way I've found to post the videos to my WCF service while sending data along with them is the multipart logic, but I'm not sure if I'm running out of memory or what, because even in debug mode it simply crashes without any real error message.
I'm running it on a physical device (not a simulator): a Samsung Galaxy S9 with Android 9.
Here's the upload code that I'm using. (P.S. As a test, I tried putting the WriteAsync into a for loop, thinking that maybe trying to write the whole gigabyte at once was the problem, but the result was the same. That's why you'll see the MAXFILESIZEPART constant in there, which is just an int equal to 10000000.)
private async Task<byte[]> GetMultipartFormDataAsync(Dictionary<string, object> postParameters, string boundary)
{
try
{
using (Stream formDataStream = new System.IO.MemoryStream())
{
bool needsCLRF = false;
foreach (var param in postParameters)
{
// Thanks to feedback from commenters, add a CRLF to allow multiple parameters to be added.
// Skip it on the first parameter, add it to subsequent parameters.
if (needsCLRF)
await formDataStream.WriteAsync(Encoding.UTF8.GetBytes("\r\n"), 0, Encoding.UTF8.GetByteCount("\r\n"));
needsCLRF = true;
if (param.Value is FileParameter)
{
FileParameter fileToUpload = (FileParameter)param.Value;
// Add just the first part of this param, since we will write the file data directly to the Stream
string header = string.Format("--{0}\r\nContent-Disposition: form-data; name=\"{1}\"; filename=\"{2}\"\r\nContent-Type: {3}\r\n\r\n",
boundary,
param.Key,
fileToUpload.FileName ?? param.Key,
fileToUpload.ContentType ?? "application/octet-stream");
await formDataStream.WriteAsync(Encoding.UTF8.GetBytes(header), 0, Encoding.UTF8.GetByteCount(header));
// Write the file data directly to the Stream, rather than serializing it to a string.
if (fileToUpload.File.Length > MAXFILESIZEPART)
{
for (var i = 0; i < fileToUpload.File.Length; i += MAXFILESIZEPART)
{
var len = i + MAXFILESIZEPART > fileToUpload.File.Length
? fileToUpload.File.Length - i
: MAXFILESIZEPART;
await formDataStream.WriteAsync(fileToUpload.File, i, len);
}
}
else
{
await formDataStream.WriteAsync(fileToUpload.File, 0, fileToUpload.File.Length);
}
}
else
{
string postData = string.Format("--{0}\r\nContent-Disposition: form-data; name=\"{1}\"\r\n\r\n{2}",
boundary,
param.Key,
param.Value);
await formDataStream.WriteAsync(Encoding.UTF8.GetBytes(postData), 0, Encoding.UTF8.GetByteCount(postData));
}
}
// Add the end of the request. Start with a newline
string footer = "\r\n--" + boundary + "--\r\n";
await formDataStream.WriteAsync(Encoding.UTF8.GetBytes(footer), 0, Encoding.UTF8.GetByteCount(footer));
// Dump the Stream into a byte[]
formDataStream.Position = 0;
byte[] formData = new byte[formDataStream.Length];
formDataStream.Read(formData, 0, formData.Length);
return formData;
}
}
catch (Exception e)
{
Console.WriteLine(e);
throw;
}
}
And it's eventually failing on the following line
await formDataStream.WriteAsync(fileToUpload.File, i, len);
but only after a certain point (about 500 MB), so I'm assuming it's a memory issue even though it doesn't say so. Is there a better way to accomplish this task? I'm doing it this way so that it also records progress as the upload happens; I'm trying to accomplish something similar to uploading large videos via the Facebook app, where the upload continues in the background while you keep working. It works great with smaller files (i.e. < 500 MB), but this is the first time I've tried a file almost a gigabyte in size.
NOTE: This happens BEFORE it starts posting anything to the server so it's not IIS or WCF related. This code crashes just writing the bytes to the memory stream.
Any suggestions?
Thanks!
According to your description, the upload stops at a certain point, and because the file you transfer is about 1 GB, it is likely a SendTimeout issue: no transfer completed within the specified time, causing an exception. SendTimeout specifies how long the write operation has to complete before timing out. The default value is 1 minute.
As a test, I set SendTimeout to 15 seconds in my configuration file. If the transfer takes more than 15 seconds, an exception occurs. You can set it to a higher value to avoid the timeout and the exception.
For information about SendTimeout, please refer to the following link:
https://learn.microsoft.com/en-us/dotnet/api/system.servicemodel.channels.binding.sendtimeout?view=dotnet-plat-ext-3.1
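The same setting can also be applied in code rather than configuration; a sketch, where the 30-minute value is only an example (for 1 GB+ uploads the message-size quota usually needs raising as well):

using System;
using System.ServiceModel;

static BasicHttpBinding CreateUploadBinding()
{
    var binding = new BasicHttpBinding();
    binding.SendTimeout = TimeSpan.FromMinutes(30);            // example value only
    binding.MaxReceivedMessageSize = 2L * 1024 * 1024 * 1024;  // 2 GB
    binding.TransferMode = TransferMode.Streamed;              // avoid buffering the whole body
    return binding;
}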
UPDATE
I think it might be a memory overflow problem. A large file can cause a memory overflow when you try to hold all of it at once. You can refer to the following link for solutions:
https://learn.microsoft.com/en-us/archive/blogs/johan/are-you-getting-outofmemoryexceptions-when-uploading-large-files
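In that spirit, here is a sketch of what the linked article recommends: copy the file part straight from disk into the request stream instead of assembling the whole multipart body in a MemoryStream first. requestStream and filePath are assumptions standing in for the real upload stream and source file.

using System.IO;
using System.Threading.Tasks;

private async Task WriteFilePartAsync(Stream requestStream, string filePath)
{
    var buffer = new byte[80 * 1024]; // stays under the 85 KB large-object-heap threshold
    using (var file = File.OpenRead(filePath))
    {
        int read;
        while ((read = await file.ReadAsync(buffer, 0, buffer.Length)) > 0)
        {
            await requestStream.WriteAsync(buffer, 0, read);
            // progress can be reported here, as the original code intends
        }
    }
}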

C# - Downloading from Google Drive in byte chunks

I'm currently developing for an environment that has poor network connectivity. My application helps to automatically download required Google Drive files for users. It works reasonably well for small files (ranging from 40KB to 2MB), but fails far too often for larger files (9MB). I know these file sizes might seem small, but in terms of my client's network environment, Google Drive API constantly fails with the 9MB file.
I've concluded that I need to download files in smaller byte chunks, but I don't see how I can do that with Google Drive API. I've read this over and over again, and I've tried the following code:
// with the Drive File ID, and the appropriate export MIME type, I create the export request
var request = DriveService.Files.Export(fileId, exportMimeType);
// take the message so I can modify it by hand
var message = request.CreateRequest();
var client = request.Service.HttpClient;
// I change the Range headers of both the client, and message
client.DefaultRequestHeaders.Range =
message.Headers.Range =
new System.Net.Http.Headers.RangeHeaderValue(100, 200);
var response = await request.Service.HttpClient.SendAsync(message);
// if status code = 200, copy to local file
if (response.IsSuccessStatusCode)
{
using (var fileStream = new FileStream(downloadFileName, FileMode.CreateNew, FileAccess.ReadWrite))
{
await response.Content.CopyToAsync(fileStream);
}
}
The resultant local file (from fileStream), however, is still full-length (i.e. a 40KB file for the 40KB Drive file, and a 500 Internal Server Error for the 9MB file). On a side note, I've also experimented with ExportRequest.MediaDownloader.ChunkSize, but from what I observe it only changes the frequency at which the ExportRequest.MediaDownloader.ProgressChanged callback is called (i.e. the callback triggers every 256KB if ChunkSize is set to 256 * 1024).
How can I proceed?
You seem to be heading in the right direction. From your last comment, the request updates progress based on the chunk size, so your observation was accurate.
Looking into the source code for MediaDownloader in the SDK, the following was found:
The core download logic. We download the media and write it to an
output stream ChunkSize bytes at a time, raising the ProgressChanged
event after each chunk. The chunking behavior is largely a historical
artifact: a previous implementation issued multiple web requests, each
for ChunkSize bytes. Now we do everything in one request, but the API
and client-visible behavior are retained for compatibility.
Your example code will only download one chunk, from byte 100 to 200. Using that approach you would have to keep track of an index and download each chunk manually, copying each partial download to the file stream:
const int KB = 0x400;
int ChunkSize = 256 * KB; // 256KB;
public async Task ExportFileAsync(string downloadFileName, string fileId, string exportMimeType) {
var exportRequest = driveService.Files.Export(fileId, exportMimeType);
var client = exportRequest.Service.HttpClient;
//you would need to know the file size
var size = await GetFileSize(fileId);
using (var file = new FileStream(downloadFileName, FileMode.CreateNew, FileAccess.ReadWrite)) {
file.SetLength(size);
var chunks = (size / ChunkSize) + 1;
for (long index = 0; index < chunks; index++) {
var request = exportRequest.CreateRequest();
var from = index * ChunkSize;
var to = from + ChunkSize - 1;
request.Headers.Range = new RangeHeaderValue(from, to);
var response = await client.SendAsync(request);
if (response.StatusCode == HttpStatusCode.PartialContent || response.IsSuccessStatusCode) {
using (var stream = await response.Content.ReadAsStreamAsync()) {
file.Seek(from, SeekOrigin.Begin);
await stream.CopyToAsync(file);
}
}
}
}
}
private async Task<long> GetFileSize(string fileId) {
var request = driveService.Files.Get(fileId);
request.Fields = "size"; // the size field is only returned when requested explicitly
var file = await request.ExecuteAsync();
return file.Size ?? 0; // Size is a nullable long in the Drive v3 client library
}
This code makes some assumptions about the Drive API/server:
That the server will allow the multiple requests needed to download the file in chunks. I don't know if requests are throttled.
That the server still accepts the Range header, as stated in the developer documentation.
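For reference, a call might look like this, assuming driveService is an authorized DriveService instance and the target is a PDF export:

await ExportFileAsync("report.pdf", fileId, "application/pdf");

One more assumption worth noting: files.get only reports a size for binary content stored in Drive. Native Google Docs files have no stored size, so GetFileSize would need a different strategy for those.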

Download file in chunks (Windows Phone)

In my application I can download some media files from the web. Normally I used the WebClient.OpenReadCompleted method to download, decrypt, and save the file to IsolatedStorage. It worked well and looked like this:
private void downloadedSong_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e, SomeOtherValues someOtherValues) // delegate, uses additional values
{
// Some preparations
try
{
if (e.Result != null)
{
using (isolatedStorageFile = IsolatedStorageFile.GetUserStoreForApplication())
{
// working with the gained stream, decryption
// saving the decrypted file to isolatedStorage
isolatedStorageFileStream = new IsolatedStorageFileStream("SomeFileNameHere", FileMode.OpenOrCreate, isolatedStorageFile);
// and use it for MediaElement
mediaElement.SetSource(isolatedStorageFileStream);
mediaElement.Position = new TimeSpan(0);
mediaElement.MediaOpened += new RoutedEventHandler(mediaFile_MediaOpened);
// and some other work
}
}
}
catch(Exception ex)
{
// try/catch stuff
}
}
But after some investigation I found out that with large files (for me, more than 100 MB) I get an OutOfMemory exception during the download. I suppose that's because WebClient.OpenReadCompleted loads the whole stream into RAM and chokes... and I will need even more memory to decrypt the stream.
After another investigation, I found out how to divide a large file into chunks after the OpenReadCompleted event, when saving the file to IsolatedStorage (or decrypting and then saving, in my case), but this only solves part of the problem... The primary problem is how to prevent the phone from choking during the download itself. Is there a way to download a large file in chunks? Then I could reuse the solution I found for the decryption step. (I'd still need a way to load such a big file into a MediaElement, but that would be another question.)
Answer:
private WebHeaderCollection headers;
private int iterator = 0;
private int delta = 1048576;
private string savedFile = "testFile.mp3";
// some preparations
// Start downloading first piece
using (IsolatedStorageFile isolatedStorageFile = IsolatedStorageFile.GetUserStoreForApplication())
{
if (isolatedStorageFile.FileExists(savedFile))
isolatedStorageFile.DeleteFile(savedFile);
}
headers = new WebHeaderCollection();
headers[HttpRequestHeader.Range] = "bytes=" + iterator.ToString() + '-' + (iterator + delta).ToString();
webClientReadCompleted = new WebClient();
webClientReadCompleted.Headers = headers;
webClientReadCompleted.OpenReadCompleted += downloadedSong_OpenReadCompleted;
webClientReadCompleted.OpenReadAsync(new Uri(song.Link));
// song.Link was given earlier
private void downloadedSong_OpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
try
{
if (e.Cancelled == false)
{
if (e.Result != null)
{
((WebClient)sender).OpenReadCompleted -= downloadedSong_OpenReadCompleted;
using (IsolatedStorageFile myIsolatedStorage = IsolatedStorageFile.GetUserStoreForApplication())
{
using (IsolatedStorageFileStream fileStream = new IsolatedStorageFileStream(savedFile, FileMode.Append, FileAccess.Write, myIsolatedStorage))
{
int mediaFileLength = (int)e.Result.Length;
byte[] byteFile = new byte[mediaFileLength];
// Stream.Read may return fewer bytes than requested, so loop until the chunk is complete.
int offset = 0;
while (offset < mediaFileLength)
{
int read = e.Result.Read(byteFile, offset, mediaFileLength - offset);
if (read <= 0) break;
offset += read;
}
fileStream.Write(byteFile, 0, offset);
// If there's something left, download it recursively
if (byteFile.Length > delta)
{
iterator = iterator + delta + 1;
headers = new WebHeaderCollection();
headers[HttpRequestHeader.Range] = "bytes=" + iterator.ToString() + '-' + (iterator + delta).ToString();
webClientReadCompleted.Headers = headers;
webClientReadCompleted.OpenReadCompleted += downloadedSong_OpenReadCompleted;
webClientReadCompleted.OpenReadAsync(new Uri(song.Link));
}
}
}
}
}
}
catch (Exception)
{
// handle or log the error as appropriate (the catch block was omitted in the original post)
}
}
To download a file in chunks you'll need to make multiple requests, one for each chunk. Unfortunately, it's not possible to say "get me this file and return it in chunks of size X".
Assuming that the server supports it, you can use the HTTP Range header to specify which bytes of a file the server should return in response to a request.
You then make multiple requests to get the file in pieces and then put it all back together on the device. You'll probably find it simplest to make sequential calls and start the next one once you've got and verified the previous chunk.
This approach makes it simple to resume a download when the user returns to the app. You just look at how much was downloaded previously and then get the next chunk.
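A sketch of that resume logic, reusing savedFile and delta from the code in the question: the next Range request simply starts where the partial file ends.

long resumeFrom = 0;
using (var store = IsolatedStorageFile.GetUserStoreForApplication())
{
    if (store.FileExists(savedFile))
        using (var existing = store.OpenFile(savedFile, FileMode.Open, FileAccess.Read))
            resumeFrom = existing.Length;
}
var headers = new WebHeaderCollection();
headers[HttpRequestHeader.Range] = "bytes=" + resumeFrom + "-" + (resumeFrom + delta);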
I've written an app which downloads movies (up to 2.6GB) in 64K chunks and then plays them back from IsolatedStorage with the MediaPlayerLauncher. Playing via the MediaElement should work too, but I haven't verified it. You can test this by loading a large file directly into IsolatedStorage (via Isolated Storage Explorer, or similar) and checking the memory implications of playing it that way.
Confirmed: You can use BackgroundTransferRequest to download multi-GB files but you must set TransferPreferences to None to force the download to happen while connected to an external power supply and while connected to wi-fi, else the BackgroundTransferRequest will fail.
I wonder if it's possible to use a BackgroundTransferRequest to download large files easily and let the phone worry about the implementation details? The documentation seems to suggest that file downloads over 100 MB are possible, and the "Range" header is reserved for the system's own use, so it probably uses this automatically behind the scenes when it can.
From the documentation regarding files over 100 MB:
For files larger than 100 MB, you must set the TransferPreferences
property of the transfer to None or the transfer will fail. If you do
not know the size of a transfer and it is possible that it could
exceed this limit, you should set the value to None, meaning that the
transfer will only proceed when the phone is connected to external
power and has a Wi-Fi connection.
From the documentation regarding use of the "Range" header:
The Headers property of the BackgroundTransferRequest object is used
to set the HTTP headers for a transfer request. The following headers
are reserved for use by the system and cannot be used by calling
applications. Adding one of the following headers to the Headers
collection will cause a NotSupportedException to be thrown when the
Add(BackgroundTransferRequest) method is used to queue the transfer
request:
If-Modified-Since
If-None-Match
If-Range
Range
Unless-Modified-Since
Here's the documentation:
http://msdn.microsoft.com/en-us/library/windowsphone/develop/hh202955(v=vs.105).aspx

How do I remove the header info using an HttpModule to Upload a file?

I've created an HttpModule in ASP.NET to allow users to upload large files. I found some sample code online that I was able to adapt for my needs. I grab the file if it is a multi-part message and then I chunk the bytes and write them to disk.
The problem is that the file is always corrupt. After doing some research, it turns out that HTTP headers and multipart body separators are prepended to the first chunk of bytes I receive. I can't seem to figure out how to parse out those bytes so that I only get the file.
Extra data / junk is prepended to the top of the file such as this:
-----------------------8cbb435d6837a3f
Content-Disposition: form-data; name="file"; filename="test.txt"
Content-Type: application/octet-stream
This kind of header information of course corrupts the file I am receiving so I need to get rid of it before I write the bytes.
Here is the code I wrote to handle the upload:
public class FileUploadManager : IHttpModule
{
public int BUFFER_SIZE = 1024;
// IHttpModule members, required for the class to compile.
public void Init(HttpApplication application)
{
application.BeginRequest += app_BeginRequest;
}
public void Dispose() { }
protected void app_BeginRequest(object sender, EventArgs e)
{
// get the context we are working under
HttpContext context = ((HttpApplication)sender).Context;
// make sure this is multi-part data
if (context.Request.ContentType.IndexOf("multipart/form-data") == -1)
{
return;
}
IServiceProvider provider = (IServiceProvider)context;
HttpWorkerRequest wr =
(HttpWorkerRequest)provider.GetService(typeof(HttpWorkerRequest));
// only process this file if it has a body and is not already preloaded
if (wr.HasEntityBody() && !wr.IsEntireEntityBodyIsPreloaded())
{
// get the total length of the body
int iRequestLength = wr.GetTotalEntityBodyLength();
// get the initial bytes loaded
int iReceivedBytes = wr.GetPreloadedEntityBodyLength();
// open file stream to write bytes to
using (System.IO.FileStream fs =
new System.IO.FileStream(
#"C:\tempfiles\test.txt",
System.IO.FileMode.CreateNew))
{
// *** NOTE: This is where I think I need to filter the bytes
// received to get rid of the junk data but I am unsure how to
// do this?
int bytesRead = BUFFER_SIZE;
// Create an input buffer to store the incomming data
byte[] byteBuffer = new byte[BUFFER_SIZE];
while ((iRequestLength - iReceivedBytes) >= bytesRead)
{
// read the next chunk of the file
bytesRead = wr.ReadEntityBody(byteBuffer, byteBuffer.Length);
// Write only the bytes actually read, not the whole buffer.
fs.Write(byteBuffer, 0, bytesRead);
iReceivedBytes += bytesRead;
// write bytes so far of file to disk
fs.Flush();
}
}
}
}
}
How would I detect and parse out this header junk information in order to isolate just the file bits?
Use the InputStreamEntity class (from Apache HttpClient, in Java) as follows:
InputStreamEntity reqEntity = new InputStreamEntity(new FileInputStream(filePath), -1);
reqEntity.setContentType("binary/octet-stream");
httppost.setEntity(reqEntity);
HttpResponse response = httpclient.execute(httppost);
If you upload like this, the boundary token and the Content-Disposition/Content-Type part headers will not be added, so the server does not see junk like the following around the content:
-----------------------8cbb435d6837a3f
Content-Disposition: form-data; name="file"; filename="test.txt"
Content-Type: application/octet-stream
-----------------------8cbb435d6837a3f
What you're running into is the boundary used to separate the various parts of the HTTP request. There should be a header at the beginning of the request called Content-type, and within that header, there's a boundary statement like so:
Content-Type: multipart/mixed;boundary=gc0p4Jq0M2Yt08jU534c0p
Once you find this boundary, simply split your request on the boundary with two hyphens (--) prepended to it. In other words, split your content on:
"--"+Headers.Get("Content-Type").Split("boundary=")[1]
Sorta pseudo-code there, but it should get the point across. This should divide the multipart form data into the appropriate sections.
For more info, see RFC1341
It's worth noting that the final boundary also has two hyphens appended to the end.
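A small concrete version of that pseudo-code; the Content-Type value shown in the comment is an assumption matching the junk data above.

// contentType looks like "multipart/form-data; boundary=---------------------8cbb435d6837a3f"
string contentType = context.Request.ContentType;
string token = contentType.Split(new[] { "boundary=" }, StringSplitOptions.None)[1];
string partBoundary = "--" + token;         // separates the individual parts
string finalBoundary = "--" + token + "--"; // marks the end of the whole body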
EDIT: Okay, so the problem you're running into is that you're not breaking the form data into the necessary components. The sections of a multipart/form-data request can each individually be treated as separate requests (meaning they can contain headers). What you should probably do is read the bytes into a string:
string formData = Encoding.ASCII.GetString(byteBuffer);
split into multiple strings based on the boundary:
string boundary = "\r\n" + context.Request.ContentType.Split(new[] { "boundary=" }, StringSplitOptions.None)[1];
string[] parts = Regex.Split(formData, boundary);
loop through each string, separating headers from content. Since you actually want the byte value of the content, keep track of the data offset since converting from ASCII back to byte might not work properly (I could be wrong, but I'm paranoid):
int dataOffset = 0;
for (int i = 0; i < parts.Length; i++)
{
string part = parts[i];
string header = part.Substring(0, part.IndexOf("\r\n\r\n"));
dataOffset += boundary.Length + header.Length + 4;
string asciiBody = part.Substring(part.IndexOf("\r\n\r\n") + 4);
byte[] body = new byte[asciiBody.Length];
// Copy the raw bytes rather than round-tripping through the string,
// since binary data does not survive the text conversion.
for (int j = dataOffset, k = 0; k < body.Length && j < byteBuffer.Length; j++)
{
body[k++] = byteBuffer[j];
}
dataOffset += asciiBody.Length;
// body now contains your binary data
}
NOTE: This is untested, so it may require some tweaking.
