Partially download and serialize big file in C#?

As part of an upcoming project at my university, I need to write a client that downloads a media file from a server and writes it to the local disk. Since these files can be very large, I need to implement partial download and serialization in order to avoid excessive memory use.
What I came up with:
namespace PartialDownloadTester
{
    using System;
    using System.Diagnostics.Contracts;
    using System.IO;
    using System.Net;
    using System.Text;

    public class DownloadClient
    {
        public static void Main(string[] args)
        {
            var dlc = new DownloadClient(args[0], args[1], args[2]);
            dlc.DownloadAndSaveToDisk();
            Console.ReadLine();
        }

        private WebRequest request;

        // directory of file
        private string dir;

        // full file identifier
        private string filePath;

        public DownloadClient(string uri, string fileName, string fileType)
        {
            this.request = WebRequest.Create(uri);
            this.request.Method = "GET";
            var sb = new StringBuilder();
            sb.Append("C:\\testdata\\DownloadedData\\");
            this.dir = sb.ToString();
            sb.Append(fileName + "." + fileType);
            this.filePath = sb.ToString();
        }

        public void DownloadAndSaveToDisk()
        {
            // make sure directory exists
            this.CreateDir();
            var response = (HttpWebResponse)request.GetResponse();
            Console.WriteLine("Content length: " + response.ContentLength);
            var rStream = response.GetResponseStream();
            int bytesRead = -1;
            do
            {
                var buf = new byte[2048];
                bytesRead = rStream.Read(buf, 0, buf.Length);
                rStream.Flush();
                this.SerializeFileChunk(buf);
            }
            while (bytesRead != 0);
        }

        private void CreateDir()
        {
            if (!Directory.Exists(dir))
            {
                Directory.CreateDirectory(dir);
            }
        }

        private void SerializeFileChunk(byte[] bytes)
        {
            Contract.Requires(!Object.ReferenceEquals(bytes, null));
            FileStream fs = File.Open(filePath, FileMode.Append);
            fs.Write(bytes, 0, bytes.Length);
            fs.Flush();
            fs.Close();
        }
    }
}
For testing purposes, I've used the following parameters:
"http://itu.dk/people/janv/mufc_abc.jpg" "mufc_abc" "jpg"
However, the picture is incomplete (only the first ~10% looks right) even though the content length prints 63780, which is the actual size of the image.
So my questions are:
Is this the right way to go for partial download and serialization or is there a better/easier approach?
Is the full content of the response stream stored in client memory? If this is the case, do I need to use HttpWebRequest.AddRange to partially download data from the server in order to conserve my client's memory?
How come the serialization fails and I get a broken image?
Do I introduce a lot of overhead when I use FileMode.Append? (MSDN states that this option "seeks to the end of the file".)
Thanks in advance

You could definitely simplify your code using a WebClient:
class Program
{
    static void Main()
    {
        DownloadClient("http://itu.dk/people/janv/mufc_abc.jpg", "mufc_abc.jpg");
    }

    public static void DownloadClient(string uri, string fileName)
    {
        using (var client = new WebClient())
        {
            using (var stream = client.OpenRead(uri))
            {
                // work with chunks of 2KB => adjust if necessary
                const int chunkSize = 2048;
                var buffer = new byte[chunkSize];
                using (var output = File.OpenWrite(fileName))
                {
                    int bytesRead;
                    while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
                    {
                        output.Write(buffer, 0, bytesRead);
                    }
                }
            }
        }
    }
}
Notice how I am writing only the number of bytes I have actually read from the socket to the output file and not the entire 2KB buffer.
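As for the AddRange part of the question: the response stream is consumed from the network as you read it, so reading in chunks as above already keeps memory use bounded. If you additionally want to request only part of the file from the server (for example, to resume an interrupted download), HttpWebRequest.AddRange issues an HTTP Range request. A minimal sketch, assuming the server honors Range headers, reusing the question's URL and target path (the 64 KB range is arbitrary):

// Sketch: fetch only the first 64 KB of the file with an HTTP Range request.
// Assumes the server supports Range (it replies 206 Partial Content).
var request = (HttpWebRequest)WebRequest.Create("http://itu.dk/people/janv/mufc_abc.jpg");
request.AddRange(0, 65535); // sends "Range: bytes=0-65535"

using (var response = (HttpWebResponse)request.GetResponse())
using (var input = response.GetResponseStream())
using (var output = File.Open(@"C:\testdata\DownloadedData\mufc_abc.jpg", FileMode.Append))
{
    var buffer = new byte[2048];
    int bytesRead;
    while ((bytesRead = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, bytesRead);
    }
}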

I don't know if this is the source of the problem; however, I would change the loop like this:

const int ChunkSize = 2048;
var buf = new byte[ChunkSize];
var rStream = response.GetResponseStream();
int bytesRead;
do {
    // Read may return fewer bytes than requested, so pass on only the
    // bytes actually read and stop when Read returns 0 (end of stream).
    bytesRead = rStream.Read(buf, 0, ChunkSize);
    if (bytesRead > 0) {
        this.SerializeFileChunk(buf, bytesRead);
    }
} while (bytesRead > 0);
The serialize method would get an additional argument
private void SerializeFileChunk(byte[] bytes, int numBytes)
and then write the right number of bytes
fs.Write(bytes, 0, numBytes);
UPDATE:
I do not see the need for closing and reopening the file each time. I would also use the using statement, which closes the resource even if an exception occurs. The using statement calls the Dispose() method of the resource at the end, which in turn calls Close() in the case of file streams. using can be applied to all types implementing IDisposable.
var buf = new byte[2048];
int bytesRead;
using (var rStream = response.GetResponseStream()) {
    using (FileStream fs = File.Open(filePath, FileMode.Append)) {
        do {
            bytesRead = rStream.Read(buf, 0, buf.Length);
            fs.Write(buf, 0, bytesRead);
        } while (bytesRead > 0);
    }
}
The using statement does something like this:

{
    var rStream = response.GetResponseStream();
    try
    {
        // do some work with rStream here.
    }
    finally
    {
        if (rStream != null)
        {
            rStream.Dispose();
        }
    }
}

Here is the solution from Microsoft: http://support.microsoft.com/kb/812406
Updated 2021-03-16: it seems the original article is no longer available. Here is an archived copy: https://mskb.pkisolutions.com/kb/812406

Related

Correct way to use GZipStream in dotNET C#

I'm working with GZipStream at the moment, using .NET 3.5.
I have two methods listed below. As input I use a text file consisting of the character 's'; the size of the file is 2 MB. This code works fine with .NET 4.5, but with .NET 3.5, after compressing and decompressing, I get a file of 435 KB, which of course isn't the same as the source file.
If I decompress the file via WinRAR, it also looks good (the same as the source file).
If I try to decompress a file using GZipStream from .NET 4.5 (a file compressed via GZipStream from .NET 3.5), the result is bad.
UPD:
In general I really need to read the file as several separate gzip chunks; in this case all the bytes of the compressed file are read in one call of the Read() method, so I still don't understand why decompressing doesn't work.
public void CompressFile()
{
    string fileIn = @"D:\sin2.txt";
    string fileOut = @"D:\sin2.txt.pgz";
    using (var fout = File.Create(fileOut))
    {
        using (var fin = File.OpenRead(fileIn))
        {
            using (var zip = new GZipStream(fout, CompressionMode.Compress))
            {
                var buffer = new byte[1024 * 1024 * 10];
                int n = fin.Read(buffer, 0, buffer.Length);
                zip.Write(buffer, 0, n);
            }
        }
    }
}

public void DecompressFile()
{
    string fileIn = @"D:\sin2.txt.pgz";
    string fileOut = @"D:\sin2.1.txt";
    using (var fsout = File.Create(fileOut))
    {
        using (var fsIn = File.OpenRead(fileIn))
        {
            var buffer = new byte[1024 * 1024 * 10];
            int n;
            while ((n = fsIn.Read(buffer, 0, buffer.Length)) > 0)
            {
                using (var ms = new MemoryStream(buffer, 0, n))
                {
                    using (var zip = new GZipStream(ms, CompressionMode.Decompress))
                    {
                        int nRead = zip.Read(buffer, 0, buffer.Length);
                        fsout.Write(buffer, 0, nRead);
                    }
                }
            }
        }
    }
}
You're trying to decompress each "chunk" as if it's a separate gzip file. Don't do that - just read from the GZipStream in a loop:
using (var fsout = File.Create(fileOut))
{
    using (var fsIn = File.OpenRead(fileIn))
    {
        using (var zip = new GZipStream(fsIn, CompressionMode.Decompress))
        {
            var buffer = new byte[1024 * 32];
            int bytesRead;
            while ((bytesRead = zip.Read(buffer, 0, buffer.Length)) > 0)
            {
                fsout.Write(buffer, 0, bytesRead);
            }
        }
    }
}
Note that your compression code should look similar, reading in a loop rather than assuming a single call to Read will read all the data.
(Personally I'd skip fsIn and just use new GZipStream(File.OpenRead(fileIn), CompressionMode.Decompress), but that's just a personal preference.)
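A sketch of what that compression-side loop might look like (paths from the question; the 32 KB buffer size is arbitrary):

public void CompressFile()
{
    string fileIn = @"D:\sin2.txt";
    string fileOut = @"D:\sin2.txt.pgz";
    using (var fin = File.OpenRead(fileIn))
    using (var zip = new GZipStream(File.Create(fileOut), CompressionMode.Compress))
    {
        var buffer = new byte[1024 * 32];
        int bytesRead;
        // Keep reading until Read reports end of stream (returns 0),
        // rather than assuming one call reads the whole file.
        while ((bytesRead = fin.Read(buffer, 0, buffer.Length)) > 0)
        {
            zip.Write(buffer, 0, bytesRead);
        }
    }
}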
First, as @Jon Skeet mentioned, you are not using the Stream.Read method correctly. It doesn't matter whether your buffer is big enough: the stream is allowed to return fewer bytes than requested, with zero indicating the end of the stream, so reading from a stream should always be done in a loop.
However, the main problem in your decompression code is the way you share the buffer. You read the input into a buffer, then wrap it in a MemoryStream (note that the constructor used does not make a copy of the passed array, but actually sets it as its internal buffer), and then you try to read from and write to that buffer at the same time. Given that decompression writes data "faster" than it reads, it's surprising that your code works at all.
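A tiny sketch (not from the original answer) illustrating that MemoryStream(byte[], int, int) aliases the array rather than copying it:

var buffer = new byte[] { 1, 2, 3, 4 };
using (var ms = new MemoryStream(buffer, 0, buffer.Length))
{
    buffer[0] = 42;                    // mutate the array directly
    Console.WriteLine(ms.ReadByte());  // prints 42, not 1: the stream shares the array
}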
The correct implementation is quite simple
static void CompressFile()
{
    string fileIn = @"D:\sin2.txt";
    string fileOut = @"D:\sin2.txt.pgz";
    using (var input = File.OpenRead(fileIn))
    using (var output = new GZipStream(File.Create(fileOut), CompressionMode.Compress))
        Write(input, output);
}

static void DecompressFile()
{
    string fileIn = @"D:\sin2.txt.pgz";
    string fileOut = @"D:\sin2.1.txt";
    using (var input = new GZipStream(File.OpenRead(fileIn), CompressionMode.Decompress))
    using (var output = File.Create(fileOut))
        Write(input, output);
}

static void Write(Stream input, Stream output, int bufferSize = 10 * 1024 * 1024)
{
    var buffer = new byte[bufferSize];
    for (int readCount; (readCount = input.Read(buffer, 0, buffer.Length)) > 0;)
        output.Write(buffer, 0, readCount);
}

Setting Buffer Size in GZipStream

I was writing a lightweight proxy in C#. When decoding the gzip content encoding, I noticed that with a small buffer size (4096) the stream is only partially decoded, depending on the size of the input. Is it a bug in my code, or is something else needed to make it work? If I set the buffer to 10 MB it works okay, but that defeats my purpose of writing a lightweight proxy.
response = webEx.Response as HttpWebResponse;
Stream input = response.GetResponseStream();
// some other operations on the response header
// calling DecompressGzip here

private static string DecompressGzip(Stream input, Encoding e)
{
    StringBuilder sb = new StringBuilder();
    using (Ionic.Zlib.GZipStream decompressor = new Ionic.Zlib.GZipStream(input, Ionic.Zlib.CompressionMode.Decompress))
    {
        // works okay for [1024*1024*8];
        byte[] buffer = new byte[4096];
        int n = 0;
        do
        {
            n = decompressor.Read(buffer, 0, buffer.Length);
            if (n > 0)
            {
                sb.Append(e.GetString(buffer));
            }
        } while (n > 0);
    }
    return sb.ToString();
}
Actually, I figured it out. The StringBuilder approach was the problem: sb.Append(e.GetString(buffer)) decodes the entire 4096-byte buffer on every iteration, regardless of how many bytes Read actually returned. Collecting the decompressed bytes in a MemoryStream and decoding them once at the end works well.
private static string DecompressGzip(Stream input, Encoding e)
{
    using (Ionic.Zlib.GZipStream decompressor = new Ionic.Zlib.GZipStream(input, Ionic.Zlib.CompressionMode.Decompress))
    {
        int read = 0;
        var buffer = new byte[4096];
        using (MemoryStream output = new MemoryStream())
        {
            while ((read = decompressor.Read(buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, read);
            }
            return e.GetString(output.ToArray());
        }
    }
}
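Note that decoding fixed-size chunks with Encoding.GetString can also split a multi-byte character across two chunks; decoding the accumulated bytes once at the end, as above, avoids that. An alternative sketch (not from the original answer) that decodes statefully while streaming, by wrapping the decompressor in a StreamReader:

private static string DecompressGzip(Stream input, Encoding e)
{
    using (var decompressor = new Ionic.Zlib.GZipStream(input, Ionic.Zlib.CompressionMode.Decompress))
    using (var reader = new StreamReader(decompressor, e))
    {
        // StreamReader keeps decoder state between reads, so multi-byte
        // characters that straddle buffer boundaries decode correctly.
        return reader.ReadToEnd();
    }
}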

Can't download complete image file from skydrive using REST API

I'm working on a quick wrapper for the SkyDrive API in C#, but I'm running into issues with downloading a file. The first part of the file comes through fine, but then differences start to appear, and shortly thereafter everything becomes null. I'm fairly sure that it's just me not reading the stream correctly.
This is the code I'm using to download the file:
public const string ApiVersion = "v5.0";
public const string BaseUrl = "https://apis.live.net/" + ApiVersion + "/";

public SkyDriveFile DownloadFile(SkyDriveFile file)
{
    string uri = BaseUrl + file.ID + "/content";
    byte[] contents = GetResponse(uri);
    file.Contents = contents;
    return file;
}

public byte[] GetResponse(string url)
{
    checkToken();
    Uri requestUri = new Uri(url + "?access_token=" + HttpUtility.UrlEncode(token.AccessToken));
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(requestUri);
    request.Method = WebRequestMethods.Http.Get;
    WebResponse response = request.GetResponse();
    Stream responseStream = response.GetResponseStream();
    byte[] contents = new byte[response.ContentLength];
    responseStream.Read(contents, 0, (int)response.ContentLength);
    return contents;
}
This is the image file I'm trying to download, and this is the image I am getting (the two images are not reproduced here).
These two images lead me to believe that I'm not waiting for the response to finish coming through, because the content-length is the same as the size of the image I'm expecting, but I'm not sure how to make my code wait for the entire response to come through or even really if that's the approach I need to take.
Here's my test code in case it's helpful
[TestMethod]
public void CanUploadAndDownloadFile()
{
    var api = GetApi();
    SkyDriveFolder folder = api.CreateFolder(null, "TestFolder", "Test Folder");
    SkyDriveFile file = api.UploadFile(folder, TestImageFile, "TestImage.png");
    file = api.DownloadFile(file);
    api.DeleteFolder(folder);

    byte[] contents = new byte[new FileInfo(TestImageFile).Length];
    using (FileStream fstream = new FileStream(TestImageFile, FileMode.Open))
    {
        fstream.Read(contents, 0, contents.Length);
    }
    using (FileStream fstream = new FileStream(TestImageFile + "2", FileMode.CreateNew))
    {
        fstream.Write(file.Contents, 0, file.Contents.Length);
    }

    Assert.AreEqual(contents.Length, file.Contents.Length);
    bool sameData = true;
    for (int i = 0; i < contents.Length && sameData; i++)
    {
        sameData = contents[i] == file.Contents[i];
    }
    Assert.IsTrue(sameData);
}
It fails at Assert.IsTrue(sameData);
This is because you don't check the return value of responseStream.Read(contents, 0, (int)response.ContentLength). Read doesn't guarantee that it will read response.ContentLength bytes; it returns the number of bytes actually read. You can use a loop or Stream.CopyTo there.
Something like this:
WebResponse response = request.GetResponse();
MemoryStream m = new MemoryStream();
response.GetResponseStream().CopyTo(m);
byte[] contents = m.ToArray();
As LB already said, you need to continue to call Read() until you have read the entire stream.
Although Stream.CopyTo will copy the entire stream, it does not let you verify that the expected number of bytes was read. The following method solves this and raises an IOException if it does not read the length specified...
public static void Copy(Stream input, Stream output, long length)
{
    byte[] bytes = new byte[65536];
    long bytesRead = 0;
    int len = 0;
    while (0 != (len = input.Read(bytes, 0, Math.Min(bytes.Length, (int)Math.Min(int.MaxValue, length - bytesRead)))))
    {
        output.Write(bytes, 0, len);
        bytesRead = bytesRead + len;
    }
    output.Flush();
    if (bytesRead != length)
        throw new IOException();
}
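A hypothetical call site for the GetResponse scenario above (variable names follow the question's code):

WebResponse response = request.GetResponse();
using (Stream responseStream = response.GetResponseStream())
using (var m = new MemoryStream())
{
    // Copies exactly ContentLength bytes or throws an IOException.
    Copy(responseStream, m, response.ContentLength);
    byte[] contents = m.ToArray();
    return contents;
}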

HttpWebResponse stripping newline characters

I'm using the following code to read the response:
using (Stream MyResponseStream = hwresponse.GetResponseStream())
{
    byte[] MyBuffer = new byte[4096];
    int BytesRead;
    while (0 < (BytesRead = MyResponseStream.Read(MyBuffer, 0, MyBuffer.Length)))
    {
        ByteArrayToFile("request.txt", MyBuffer, BytesRead);
    }
}
This is the function to write to a file:
public void ByteArrayToFile(string _FileName, byte[] _ByteArray, int BytesRead)
{
    System.IO.FileStream _FileStream = new System.IO.FileStream(_FileName, System.IO.FileMode.Append, System.IO.FileAccess.Write);
    _FileStream.Write(_ByteArray, 0, BytesRead);
    _FileStream.Close();
}
If I use WebClient, I get the newlines and everything is parsed correctly. When I use HttpWebResponse, newline characters get stripped (not all, but about 80%). Any hints why this is happening? Thanks!
You could just use the following code to write the entire response to a file:
using (StreamReader MyResponseStream = new StreamReader(hwresponse.GetResponseStream()))
{
    using (StreamWriter _FileStream = new StreamWriter("request.txt", true))
    {
        _FileStream.Write(MyResponseStream.ReadToEnd());
    }
}
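If the goal is simply a byte-for-byte copy of the response with no text decoding at all, a sketch using Stream.CopyTo (available from .NET 4; hwresponse as in the question) sidesteps any newline or encoding translation:

using (Stream MyResponseStream = hwresponse.GetResponseStream())
using (FileStream file = File.Create("request.txt"))
{
    // Copies raw bytes; no StreamReader/StreamWriter text translation occurs.
    MyResponseStream.CopyTo(file);
}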

Downloading and saving excel file

I am using:

private void get_stocks_data()
{
    byte[] result;
    byte[] buffer = new byte[4096];
    WebRequest wr = WebRequest.Create("http://www.tase.co.il/TASE/Pages/ExcelExport.aspx?sn=he-IL_ds&enumTblType=AllSecurities&Columns=he-IL_Columns&Titles=he-IL_Titles&TblId=0&ExportType=1");
    using (WebResponse response = wr.GetResponse())
    {
        using (Stream responseStream = response.GetResponseStream())
        {
            using (MemoryStream memoryStream = new MemoryStream())
            {
                int count = 0;
                do
                {
                    count = responseStream.Read(buffer, 0, buffer.Length);
                    memoryStream.Write(buffer, 0, count);
                } while (count != 0);
                result = memoryStream.ToArray();
                write_data_to_excel(result);
            }
        }
    }
}
to download the Excel file, and this method to write the file on my computer:
private void write_data_to_excel(byte[] input)
{
    StreamWriter str = new StreamWriter("stockdata.xls");
    for (int i = 0; input.Length > i; i++)
    {
        str.WriteLine(input[i].ToString());
    }
    str.Close();
}
The result is that I get a lot of numbers...
What am I doing wrong? The file I am downloading is Excel version 2003; on my computer I have 2007...
Thanks.
I would suggest that you use WebClient.DownloadFile() instead.
This is a higher-level method that abstracts away creating the request manually, dealing with encoding, etc.
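For example (URL from the question; the local file name is illustrative):

using (var client = new WebClient())
{
    // Streams the response body straight to disk; no manual buffering needed.
    client.DownloadFile(
        "http://www.tase.co.il/TASE/Pages/ExcelExport.aspx?sn=he-IL_ds&enumTblType=AllSecurities&Columns=he-IL_Columns&Titles=he-IL_Titles&TblId=0&ExportType=1",
        "stockdata.xls");
}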
The problem is in your write_data_to_excel function. Because you use the StreamWriter.WriteLine method, which takes a string, each byte is written as its decimal text representation: a binary value of, say, 10 becomes the string "10". Write the raw bytes instead:

using (FileStream f = File.OpenWrite("stockdata.xls"))
{
    f.Write(input, 0, input.Length);
}

This will work.
