Asynchronous Chunked File Upload in C#

I'm trying to asynchronously upload file chunks from a .NET client to a PHP web server using HttpClient in C#.
I can chunk the file just fine, and upload those chunks to the remote server, but I'm not sure if this is really running asynchronously. Ideally, I would upload the chunks in parallel to maximize upload speed. My code is as follows:
//Call to chunk and upload file in another async method. I'm only showing the call here:
FileStream fileStream = new FileStream(fileNameIn, FileMode.Open, FileAccess.Read);
await ChunkFileAsync(fileStream, uploadFile.Name, url);
// To chunk the file
public static async Task<bool> ChunkFileAsync(FileStream fileStream, string fileName, string url)
{
int chunkSize = 102400; // 100 KB per chunk
int totalChunks = (int)Math.Ceiling((double)fileStream.Length / chunkSize);
// Loop through the whole stream and send it chunk by chunk asynchronously;
bool retVal = true;
List<Task> tasks = new List<Task>();
try {
for (int i = 0; i < totalChunks; i++)
{
int startIndex = i * chunkSize;
int endIndex = (int)(startIndex + chunkSize > fileStream.Length ? fileStream.Length : startIndex + chunkSize);
int length = endIndex - startIndex;
byte[] bytes = new byte[length];
fileStream.Read(bytes, 0, bytes.Length);
Task task = SendChunkRequest(fileName, url, i, bytes);
tasks.Add(task);
retVal = true;
}
await Task.WhenAll(tasks);
}
catch (WebException e)
{
Console.WriteLine("ERROR - There was an error chunking the file before sending the request " + e.Message);
retVal = false;
}
return retVal;
}
//To upload chunks to remote server
public static async Task<bool> SendChunkRequest(string fileName, string url, int loopCounter, Byte[] bytes)
{
bool response = false;
try
{
ByteArrayContent data = new ByteArrayContent(bytes);
data.Headers.ContentType = System.Net.Http.Headers.MediaTypeHeaderValue.Parse("multipart/form-data");
HttpClient requestToServer = new HttpClient();
MultipartFormDataContent form = new MultipartFormDataContent();
form.Add(data, "file", fileName + loopCounter);
await requestToServer.PostAsync(url, form);
requestToServer.Dispose();
response = true;
}
catch (Exception e)
{
Console.WriteLine("There was an exception: " + e);
}
return response;
}
If I upload a 100 MB file, I see all ten chunks uploaded to the server one at a time. Can I make this code more efficient? Any help is greatly appreciated.
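Task.WhenAll does start all the chunk POSTs before awaiting, so the usual suspect when they still go out one at a time (an assumption here, since the connection settings aren't shown) is the .NET Framework default of two concurrent connections per host, compounded by creating a new HttpClient per chunk. A minimal sketch of both fixes; ChunkUploader and UploadChunksAsync are illustrative names, not from the original code:
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

public static class ChunkUploader
{
    // One shared HttpClient instead of one per chunk, so sockets get reused.
    private static readonly HttpClient client = new HttpClient();

    public static async Task UploadChunksAsync(string path, string fileName, string url)
    {
        // .NET Framework defaults to 2 concurrent connections per host,
        // which serializes the uploads; raise it before posting.
        ServicePointManager.DefaultConnectionLimit = 16;

        const int chunkSize = 102400; // 100 KB, as in the question
        var tasks = new List<Task<HttpResponseMessage>>();
        using (var fs = new FileStream(path, FileMode.Open, FileAccess.Read))
        {
            int totalChunks = (int)Math.Ceiling((double)fs.Length / chunkSize);
            for (int i = 0; i < totalChunks; i++)
            {
                int length = (int)Math.Min((long)chunkSize, fs.Length - (long)i * chunkSize);
                byte[] bytes = new byte[length];
                int offset = 0;
                while (offset < length) // Read may return fewer bytes than requested
                {
                    int read = await fs.ReadAsync(bytes, offset, length - offset);
                    if (read == 0) break;
                    offset += read;
                }
                var form = new MultipartFormDataContent();
                form.Add(new ByteArrayContent(bytes), "file", fileName + i);
                tasks.Add(client.PostAsync(url, form)); // queue, don't await yet
            }
            await Task.WhenAll(tasks); // all chunks in flight concurrently
        }
    }
}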

Related

Read large files - 2GB+ for Google Drive API Upload

I'm currently working on a small backup tool written in C# that is supposed to upload files contained within a specified folder to Google Drive via its API. The program largely functions as it's supposed to; the only problem is that it is unable to handle files larger than 2 GB.
The problem is caused by the upload function itself, which is attached below; it uses a byte array to read the file and subsequently create a MemoryStream. As far as I'm aware (I'm still a beginner when it comes to C#), a byte array can only contain 2 GB of information before throwing an overflow exception. To combat this I've tried to utilize FileStream.Read (second bit of code attached below) instead of System.IO.File.ReadAllBytes, though this again led to an overflow exception on the byte array. I know that at this point I'd have to split the file up; however, due to the rather limited documentation of the GDrive API for C# - at least from what I've seen - and my limited knowledge of C#, I've got little to no clue how to tackle this problem.
I'm sorry for the long read, all help on this matter is highly appreciated.
Upload Function V1 (System.IO.File.ReadAllBytes):
private static Google.Apis.Drive.v3.Data.File UploadFile(Boolean useFolder, String mime, DriveService _service, string _uploadFile, string _parent, string _descrp = "")
{
if (System.IO.File.Exists(_uploadFile))
{
Google.Apis.Drive.v3.Data.File body = new Google.Apis.Drive.v3.Data.File
{
Name = System.IO.Path.GetFileName(_uploadFile),
Description = _descrp,
MimeType = mime
};
if (useFolder)
{
body.Parents = new List<string> { _parent };
}
byte[] byteArray = System.IO.File.ReadAllBytes(_uploadFile);
MemoryStream stream = new System.IO.MemoryStream(byteArray);
try
{
FilesResource.CreateMediaUpload request = _service.Files.Create(body, stream, mime);
request.SupportsTeamDrives = true;
request.Upload();
return request.ResponseBody;
}
catch (Exception e)
{
Console.WriteLine("Error Occured: " + e);
return null;
}
}
else
{
Console.WriteLine("The file does not exist. 404");
return null;
}
}
Upload Method V2 (FileStream):
private static Google.Apis.Drive.v3.Data.File UploadFile(Boolean useFolder, String mime, DriveService _service, string _uploadFile, string _parent, string _descrp = "")
{
if (System.IO.File.Exists(_uploadFile))
{
Google.Apis.Drive.v3.Data.File body = new Google.Apis.Drive.v3.Data.File
{
Name = System.IO.Path.GetFileName(_uploadFile),
Description = _descrp,
MimeType = mime
};
if (useFolder)
{
body.Parents = new List<string> { _parent };
}
//byte[] byteArray = System.IO.File.ReadAllBytes(_uploadFile);
using (FileStream fileStream = new FileStream(_uploadFile, FileMode.Open, FileAccess.Read))
{
Console.WriteLine("ByteArrayStart");
byte[] byteArray = new byte[fileStream.Length];
int bytesToRead = (int)fileStream.Length;
int bytesRead = 0;
while (bytesToRead > 0)
{
int n = fileStream.Read(byteArray, bytesRead, bytesToRead);
if (n == 0)
{
break;
}
bytesRead += n;
Console.WriteLine("Bytes Read: " + bytesRead);
bytesToRead -= n;
Console.WriteLine("Bytes to Read: " + bytesToRead);
}
bytesToRead = byteArray.Length;
MemoryStream stream = new System.IO.MemoryStream(byteArray);
try
{
FilesResource.CreateMediaUpload request = _service.Files.Create(body, stream, mime);
request.SupportsTeamDrives = true;
request.Upload();
return request.ResponseBody;
}
catch (Exception e)
{
Console.WriteLine("Error Occured: " + e);
return null;
}
}
}
else
{
Console.WriteLine("The file does not exist. 404");
return null;
}
}
MemoryStream's constructors only work with byte arrays that are limited to Int32.MaxValue bytes. Why not just use your FileStream object directly?
var fileMetadata = new Google.Apis.Drive.v3.Data.File()
{
Name = "flag.jpg"
};
FilesResource.CreateMediaUpload request;
using (var stream = new System.IO.FileStream(@"C:\temp\flag.jpg", System.IO.FileMode.Open))
{
request = service.Files.Create(fileMetadata, stream, "image/jpeg");
request.Fields = "id";
request.Upload();
}
var file = request.ResponseBody;
Really, for a file that big you should be using resumable upload, but I'm going to have to dig around for some sample code for that.
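In the meantime, a minimal sketch, assuming the Google.Apis client library: CreateMediaUpload is already a resumable upload when given a stream, and its ChunkSize property (if I remember the API correctly) controls how much is sent per request, so combining it with a FileStream sidesteps the 2 GB byte[] limit entirely:
using (var stream = new System.IO.FileStream(_uploadFile, System.IO.FileMode.Open, System.IO.FileAccess.Read))
{
    FilesResource.CreateMediaUpload request = _service.Files.Create(body, stream, mime);
    request.SupportsTeamDrives = true;
    // ChunkSize should be a multiple of MinimumChunkSize (256 KB); ~1 MB here.
    request.ChunkSize = Google.Apis.Upload.ResumableUpload.MinimumChunkSize * 4;
    var progress = request.Upload();
    if (progress.Status != Google.Apis.Upload.UploadStatus.Completed)
        Console.WriteLine("Upload failed: " + progress.Exception);
}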

File download in chunks in http-context response C#

I have the below scenario.
Client sends a request to Server-1 for file download.
Server-1 sends a request to Server-2 for the file.
To make this work I need to create a mechanism where, once the client sends a request to Server-1, Server-1 will request the file from Server-2, which will send it as a response output stream in chunks. Server-1 will send these file chunks to the client browser continuously as it keeps receiving them from Server-2.
I have written the code below; theoretically it looks fine, but it is still not working.
It is not downloading the entire file in the client browser; it seems like the last chunk is not transferred to Server-1, or is not downloaded to the client browser from Server-1.
Server-1 Code (Where client request for File download)
private void ProccesBufferedResponse(HttpWebRequest webRequest, HttpContext context)
{
char[] responseChars = null;
byte[] buffer = null;
if (webRequest == null)
logger.Error("Request string is null for Perfios Docs Download at ProccesBufferedResponse()");
context.Response.Buffer = false;
context.Response.BufferOutput = false;
try
{
WebResponse webResponse = webRequest.GetResponse();
context.Response.ContentType = "application/pdf";
context.Response.AddHeader("Content-disposition", webResponse.Headers["Content-disposition"]);
StreamReader responseStream = new StreamReader(webResponse.GetResponseStream());
while (!responseStream.EndOfStream)
{
responseChars = new char[responseStream.ToString().ToCharArray().Length];
responseStream.Read(responseChars, 0, responseChars.Length);
buffer = Encoding.ASCII.GetBytes(responseChars);
context.Response.Clear();
context.Response.OutputStream.Write(buffer, 0, buffer.Length);
context.Response.Flush();
}
}
catch (Exception ex)
{
throw;
}
finally
{
context.Response.Flush();
context.Response.End();
}
}
Server-2 Code (Where Server-1 will send request for file)
private void DownloadInstaPerfiosDoc(int CompanyID, string fileName, string Foldertype)
{
string folderPath;
string FilePath;
int chunkSize = 1024;
int startIndex = 0;
int endIndex = 0;
int length = 0;
byte[] bytes = null;
DirectoryInfo dir;
folderPath = GetDocumentDirectory(CompanyID, Foldertype);
FilePath = folderPath + "\\" + fileName;
dir = new DirectoryInfo(folderPath);
HttpContext.Current.Response.Buffer = false;
HttpContext.Current.Response.BufferOutput = false;
if (dir.Exists && dir.GetFiles().Length > 0)
{
foreach (var file in dir.GetFiles(fileName))
{
FilePath = folderPath + "\\" + file.Name;
FileStream fsReader = new FileStream(FilePath, FileMode.Open, FileAccess.Read);
HttpContext.Current.Response.ContentType = "application/pdf";
HttpContext.Current.Response.AddHeader("Content-disposition", string.Format("attachment; filename = \"{0}\"", fileName));
int totalChunks = (int)Math.Ceiling((double)fsReader.Length / chunkSize);
for (int i = 0; i < totalChunks; i++)
{
startIndex = i * chunkSize;
if (startIndex + chunkSize > fsReader.Length)
endIndex = (int)fsReader.Length;
else
endIndex = startIndex + chunkSize;
length = (int)endIndex - startIndex;
bytes = new byte[length];
fsReader.Read(bytes, 0, bytes.Length);
HttpContext.Current.Response.Clear();
HttpContext.Current.Response.OutputStream.Write(bytes, 0, bytes.Length);
HttpContext.Current.Response.Flush();
}
}
}
}
Please help me to resolve this issue.
It is possible and feasible. I'll give a pseudo procedure for you to understand the overall idea; a concrete sketch for Server-1 follows the steps.
Server1
download action gets hit
create a request to server2
get the response stream of your server2 request
read the response stream in desired chunk sizes until it's consumed completely
write each chunk (as soon as you read) to current response stream
Server2
download action gets hit
write your stream onto your current response stream however you like
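A minimal C# sketch of the Server-1 steps above, assuming the same HttpWebRequest/HttpContext types as the question. The key difference from the question's code is reading the upstream response as raw bytes; a StreamReader decodes binary PDF data as text, which both corrupts it and mis-sizes the final chunk (note also that calling Response.Clear() inside the loop, as the question does, can discard data already buffered):
private void ProxyFileResponse(HttpWebRequest webRequest, HttpContext context)
{
    context.Response.Buffer = false;
    context.Response.BufferOutput = false;
    using (WebResponse webResponse = webRequest.GetResponse())
    using (Stream source = webResponse.GetResponseStream())
    {
        context.Response.ContentType = "application/pdf";
        context.Response.AddHeader("Content-disposition", webResponse.Headers["Content-disposition"]);
        byte[] buffer = new byte[64 * 1024];
        int read;
        // Read until the stream is drained, so the last partial chunk is not lost.
        while ((read = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            context.Response.OutputStream.Write(buffer, 0, read);
            context.Response.Flush();
        }
    }
    context.Response.End();
}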

C# Download big file from Server with less memory consumption

I have a big file of size 42 MB. I want to download the file with less memory consumption.
Controller Code
public ActionResult Download()
{
var filePath = "file path in server";
FileInfo file = new FileInfo(filePath);
Response.ContentType = "application/zip";
Response.AppendHeader("Content-Disposition", "attachment; filename=folder.zip");
Response.TransmitFile(file.FullName);
Response.End();
return new EmptyResult(); // nothing left to return; the file was already transmitted
}
Alternative method tried with Stream:
public ActionResult Download()
{
string failure = string.Empty;
Stream stream = null;
int bytesToRead = 10000;
long LengthToRead;
try
{
var path = "file path from server";
FileWebRequest fileRequest = (FileWebRequest)FileWebRequest.Create(path);
FileWebResponse fileResponse = (FileWebResponse)fileRequest.GetResponse();
if (fileRequest.ContentLength > 0)
fileResponse.ContentLength = fileRequest.ContentLength;
//Get the Stream returned from the response
stream = fileResponse.GetResponseStream();
LengthToRead = stream.Length;
//Indicate the type of data being sent
Response.ContentType = "application/octet-stream";
//Name the file
Response.AddHeader("Content-Disposition", "attachment; filename=SolutionWizardDesktopClient.zip");
Response.AddHeader("Content-Length", fileResponse.ContentLength.ToString());
int length;
do
{
// Verify that the client is connected.
if (Response.IsClientConnected)
{
byte[] buffer = new Byte[bytesToRead];
// Read data into the buffer.
length = stream.Read(buffer, 0, bytesToRead);
// and write it out to the response's output stream
Response.OutputStream.Write(buffer, 0, length);
// Flush the data
Response.Flush();
//Clear the buffer
LengthToRead = LengthToRead - length;
}
else
{
// cancel the download if client has disconnected
LengthToRead = -1;
}
} while (LengthToRead > 0); //Repeat until no data is read
}
finally
{
if (stream != null)
{
//Close the input stream
stream.Close();
}
Response.End();
Response.Close();
}
return View("Failed");
}
Due to the size of the file, it is consuming more memory, which leads to a performance issue.
After checking the IIS log, the two download approaches are taking 42 MB and 64 MB respectively.
Thanks in advance
A better option would be to use FileResult instead of ActionResult:
Using this method means you don't have to load the file/bytes in memory before serving.
public FileResult Download()
{
var filePath = "file path in server";
return new FilePathResult(Server.MapPath(filePath), "application/zip");
}
Edit: For larger files FilePathResult will also fail.
Your best bet is probably Response.TransmitFile() then. I've used this on larger files (GBs) and had no issues before
public ActionResult Download()
{
var filePath = @"file path from server";
Response.Clear();
Response.ContentType = "application/octet-stream";
Response.AppendHeader("Content-Disposition", "filename=" + filePath);
Response.TransmitFile(filePath);
Response.End();
return Index();
}
From MSDN:
Writes the specified file directly to an HTTP response output stream,
without buffering it in memory.
Try setting the Transfer-Encoding header to chunked, and return an HttpResponseMessage with a PushStreamContent. Transfer-Encoding of chunked means that the HTTP response will not have a Content-Length header, and so the client will have to parse the chunks of the HTTP response as a stream. Note, I've never run across a client (browser, etc) that didn't handle Transfer Encoding chunked. You can read more at the link below.
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Transfer-Encoding
[HttpGet]
public async Task<HttpResponseMessage> Download(CancellationToken token)
{
var response = new HttpResponseMessage(System.Net.HttpStatusCode.OK)
{
Content = new PushStreamContent(async (stream, context, transportContext) =>
{
try
{
using (var fileStream = System.IO.File.OpenRead("some path to MyBigDownload.zip"))
{
await fileStream.CopyToAsync(stream);
}
}
finally
{
stream.Close();
}
}, "application/octet-stream"),
};
response.Headers.TransferEncodingChunked = true;
response.Content.Headers.ContentDisposition = new System.Net.Http.Headers.ContentDispositionHeaderValue("attachment")
{
FileName = "MyBigDownload.zip"
};
return response;
}
I had a similar problem, but I didn't have the file on local disk; I had to download it from an API (my MVC app was like a proxy).
The key thing is to set Response.Buffer=false; on your MVC action. I think @JanusPienaar's first solution should work with this.
My MVC action is:
public class HomeController : Controller
{
public async Task<FileStreamResult> Streaming(long RecordCount)
{
HttpClient Client;
System.IO.Stream Stream;
//This is the key thing
Response.Buffer=false;
Client = new HttpClient() { BaseAddress = new Uri("http://MyApi") };
Stream = await Client.GetStreamAsync("api/Streaming?RecordCount="+RecordCount);
return new FileStreamResult(Stream, "text/csv");
}
}
And my test WebApi (which generates the file) is:
public class StreamingController : ApiController
{
// GET: api/Streaming/5
public HttpResponseMessage Get(long RecordCount)
{
var response = Request.CreateResponse();
response.Content=new PushStreamContent((stream, http, transport) =>
{
RecordsGenerator Generator = new RecordsGenerator();
long i;
using(var writer = new System.IO.StreamWriter(stream, System.Text.Encoding.UTF8))
{
for(i=0; i<RecordCount; i++)
{
writer.Write(Generator.GetRecordString(i));
if(0==(i&0xFFFFF))
System.Diagnostics.Debug.WriteLine($"Record no: {i:N0}");
}
}
});
return response;
}
class RecordsGenerator
{
const string abc = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
char[] Chars = new char[14];//Ceiling(log26(2^63))
public string GetRecordString(long Record)
{
int iLength = 0;
long Div = Record, Mod;
do
{
iLength++;
Div=Math.DivRem(Div, abc.Length, out Mod);
//Save from backwards
Chars[Chars.Length-iLength]=abc[(int)Mod];
}
while(Div!=0);
return $"{Record} {new string(Chars, Chars.Length-iLength, iLength)}\r\n";
}
}
}
}
If RecordCount is 100000000, the file generated by TestApi is 1.56 GB. Neither WebApi nor MVC consumes so much memory.
There is a Rizwan Ansari post that worked for me:
There are situations when you need to provide a download option for a big file located somewhere on the server or generated at runtime. The function below can be used to download files of any size. Sometimes downloading a big file throws an OutOfMemoryException saying “Insufficient memory to continue execution of the program”, so the function also handles that situation by breaking the file into 1 MB chunks (customizable via the bufferSize variable).
Usage:
DownloadLargeFile("A big file.pdf", "D:\\Big Files\\Big File.pdf", "application/pdf", System.Web.HttpContext.Current.Response);
You can change "application/pdf" by the right Mime type
Download Function:
public static void DownloadLargeFile(string DownloadFileName, string FilePath, string ContentType, HttpResponse response)
{
Stream stream = null;
// read buffer in 1 MB chunks
// change this if you want a different buffer size
int bufferSize = 1048576;
byte[] buffer = new Byte[bufferSize];
// buffer read length
int length;
// Total length of file
long lengthToRead;
try
{
// Open the file in read only mode
stream = new FileStream(FilePath, FileMode.Open, FileAccess.Read, FileShare.Read);
// Total length of file
lengthToRead = stream.Length;
response.ContentType = ContentType;
response.AddHeader("Content-Disposition", "attachment; filename=" + HttpUtility.UrlEncode(DownloadFileName, System.Text.Encoding.UTF8));
while (lengthToRead > 0)
{
// Verify that the client is connected.
if (response.IsClientConnected)
{
// Read the data in buffer
length = stream.Read(buffer, 0, bufferSize);
// Write the data to output stream.
response.OutputStream.Write(buffer, 0, length);
// Flush the data
response.Flush();
//buffer = new Byte[10000];
lengthToRead = lengthToRead - length;
}
else
{
// if user disconnects stop the loop
lengthToRead = -1;
}
}
}
catch (Exception exp)
{
// handle exception
response.ContentType = "text/html";
response.Write("Error : " + exp.Message);
}
finally
{
if (stream != null)
{
stream.Close();
}
response.End();
response.Close();
}
}
You just have to use IIS to enable HTTP downloads; look at this link.
Then you just need to return the HTTP path of the file, and it will download quickly and easily.
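A minimal sketch of that approach, assuming IIS is already configured to serve the folder as static content; the /downloads/ virtual path is hypothetical:
public ActionResult Download()
{
    // IIS streams the static file itself, without buffering it in the app;
    // the action only issues a redirect to the file's HTTP path.
    return Redirect("/downloads/folder.zip");
}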

Simple async I/O handler is dropping requests

I've written a small .ashx handler in C#, with straightforward logic:
Generate a random payload string of 54 kilobytes.
Get a unique filename to store the data.
Write the string to file in async manner.
Read back from that file in async manner.
Send that string back to response stream.
The idea is to throw multiple concurrent requests to the above handler using apache-bench, so that I can compare ASP.NET 4.5 against other frameworks (like nodejs) for a large-size app I'm going to develop which is heavily I/O bound. Here is the code for the Handler:
public class Handler : System.Web.IHttpHandler
{
private StringBuilder payload = null;
private async void processAsync()
{
var r = new Random ();
//generate a random string of 108kb
payload=new StringBuilder();
for (var i = 0; i < 54000; i++)
payload.Append( (char)(r.Next(65,90)));
//create a unique file
var fname = "";
do{fname = "c:\\source\\csharp\\asyncdemo\\" + r.Next (1, 99999999).ToString () + ".txt";
} while(File.Exists(fname));
//write the string to disk in async manner
using(FileStream fs = File.Open(fname,FileMode.CreateNew,FileAccess.ReadWrite))
{
var bytes=(new System.Text.ASCIIEncoding ()).GetBytes (payload.ToString());
await fs.WriteAsync (bytes,0,bytes.Length);
fs.Close ();
}
//read the string back from disk in async manner
payload = new StringBuilder ();
StreamReader sr = new StreamReader (fname);
payload.Append(await sr.ReadToEndAsync ());
sr.Close ();
//File.Delete (fname); //remove the file
}
public void ProcessRequest (HttpContext context)
{
Task task = new Task(processAsync);
task.Start ();
task.Wait ();
//write the string back on the response stream
context.Response.ContentType = "text/plain";
context.Response.Write (payload.ToString());
}
public bool IsReusable
{
get {
return false;
}
}
}
The trouble is that the handler runs perfectly when I compile it and run it from my browser once. But when I send concurrent requests to it using ab, about half the requests get dropped. Not only that, but it's way too slow compared to a similar script I've written in node.js - I've tested both on IIS-7/Windows and Mono/Linux environments. Would you say ASP.NET is inherently slower than node.js at handling heavy async I/O load?
I've reimplemented my code in the following manner, and it works now:
private StringBuilder payload = null;
private async void processAsync()
{
var r = new Random (DateTime.Now.Ticks.GetHashCode());
//generate a random string of 108kb
payload=new StringBuilder();
for (var i = 0; i < 54000; i++)
payload.Append( (char)(r.Next(65,90)));
//create a unique file
var fname = "";
do
{
//fname = @"c:\source\csharp\asyncdemo\" + r.Next (1, 99999999).ToString () + ".txt";
fname = r.Next (1, 99999999).ToString () + ".txt";
} while(File.Exists(fname));
//write the string to disk in async manner
using(FileStream fs = new FileStream(fname,FileMode.CreateNew,FileAccess.Write, FileShare.None,
bufferSize: 4096, useAsync: true))
{
var bytes=(new System.Text.ASCIIEncoding ()).GetBytes (payload.ToString());
await fs.WriteAsync (bytes,0,bytes.Length);
fs.Close ();
}
//read the string back from disk in async manner
payload = new StringBuilder ();
//FileStream ;
//payload.Append(await fs.ReadToEndAsync ());
using (var fs = new FileStream (fname, FileMode.Open, FileAccess.Read,
FileShare.Read, bufferSize: 4096, useAsync: true)) {
int numRead;
byte[] buffer = new byte[0x1000];
while ((numRead = await fs.ReadAsync (buffer, 0, buffer.Length)) != 0) {
payload.Append (Encoding.ASCII.GetString (buffer, 0, numRead)); // the file was written as ASCII, so decode it as ASCII
}
}
//fs.Close ();
//File.Delete (fname); //remove the file
}
public void ProcessRequest (HttpContext context)
{
Task task = new Task(processAsync);
task.Start ();
task.Wait ();
//write the string back on the response stream
context.Response.ContentType = "text/plain";
context.Response.Write (payload.ToString());
}

Can't download complete image file from skydrive using REST API

I'm working on a quick wrapper for the SkyDrive API in C#, but running into issues with downloading a file. For the first part of the file, everything comes through fine, but then differences start to appear and shortly thereafter everything becomes null. I'm fairly sure it's just me not reading the stream correctly.
This is the code I'm using to download the file:
public const string ApiVersion = "v5.0";
public const string BaseUrl = "https://apis.live.net/" + ApiVersion + "/";
public SkyDriveFile DownloadFile(SkyDriveFile file)
{
string uri = BaseUrl + file.ID + "/content";
byte[] contents = GetResponse(uri);
file.Contents = contents;
return file;
}
public byte[] GetResponse(string url)
{
checkToken();
Uri requestUri = new Uri(url + "?access_token=" + HttpUtility.UrlEncode(token.AccessToken));
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUri);
request.Method = WebRequestMethods.Http.Get;
WebResponse response = request.GetResponse();
Stream responseStream = response.GetResponseStream();
byte[] contents = new byte[response.ContentLength];
responseStream.Read(contents, 0, (int)response.ContentLength);
return contents;
}
This is the image file I'm trying to download
And this is the image I am getting
These two images lead me to believe that I'm not waiting for the response to finish coming through, because the Content-Length is the same as the size of the image I'm expecting. But I'm not sure how to make my code wait for the entire response to come through, or even whether that's the approach I need to take.
Here's my test code in case it's helpful
[TestMethod]
public void CanUploadAndDownloadFile()
{
var api = GetApi();
SkyDriveFolder folder = api.CreateFolder(null, "TestFolder", "Test Folder");
SkyDriveFile file = api.UploadFile(folder, TestImageFile, "TestImage.png");
file = api.DownloadFile(file);
api.DeleteFolder(folder);
byte[] contents = new byte[new FileInfo(TestImageFile).Length];
using (FileStream fstream = new FileStream(TestImageFile, FileMode.Open))
{
fstream.Read(contents, 0, contents.Length);
}
using (FileStream fstream = new FileStream(TestImageFile + "2", FileMode.CreateNew))
{
fstream.Write(file.Contents, 0, file.Contents.Length);
}
Assert.AreEqual(contents.Length, file.Contents.Length);
bool sameData = true;
for (int i = 0; i < contents.Length && sameData; i++)
{
sameData = contents[i] == file.Contents[i];
}
Assert.IsTrue(sameData);
}
It fails at Assert.IsTrue(sameData);
This is because you don't check the return value of responseStream.Read(contents, 0, (int)response.ContentLength);. Read doesn't ensure that it will read response.ContentLength bytes. Instead it returns the number of bytes read. You can use a loop or stream.CopyTo there.
Something like this:
WebResponse response = request.GetResponse();
MemoryStream m = new MemoryStream();
response.GetResponseStream().CopyTo(m);
byte[] contents = m.ToArray();
As LB already said, you need to continue to call Read() until you have read the entire stream.
Although Stream.CopyTo will copy the entire stream, it does not ensure that it reads the number of bytes expected. The following method solves this, raising an IOException if it does not read the length specified...
public static void Copy(Stream input, Stream output, long length)
{
byte[] bytes = new byte[65536];
long bytesRead = 0;
int len = 0;
while (0 != (len = input.Read(bytes, 0, Math.Min(bytes.Length, (int)Math.Min(int.MaxValue, length - bytesRead)))))
{
output.Write(bytes, 0, len);
bytesRead = bytesRead + len;
}
output.Flush();
if (bytesRead != length)
throw new IOException();
}
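For completeness, a hypothetical usage of that helper against the question's request object; it buffers exactly ContentLength bytes or throws:
WebResponse response = request.GetResponse();
using (Stream responseStream = response.GetResponseStream())
using (MemoryStream m = new MemoryStream())
{
    Copy(responseStream, m, response.ContentLength);
    byte[] contents = m.ToArray(); // guaranteed complete, or an IOException was thrown
}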
