I'm using the following code to read the response:
using (Stream MyResponseStream = hwresponse.GetResponseStream())
{
byte[] MyBuffer = new byte[4096];
int BytesRead;
while (0 < (BytesRead = MyResponseStream.Read(MyBuffer, 0, MyBuffer.Length)))
{
ByteArrayToFile("request.txt", MyBuffer, BytesRead);
}
}
This is the function to write to a file:
public void ByteArrayToFile(string _FileName, byte[] _ByteArray, int BytesRead)
{
System.IO.FileStream _FileStream = new System.IO.FileStream(_FileName, System.IO.FileMode.Append, System.IO.FileAccess.Write);
_FileStream.Write(_ByteArray, 0, BytesRead);
_FileStream.Close();
}
If I use WebClient, the new lines are preserved and everything is parsed correctly. When I use HttpWebResponse, new line characters get stripped (not all of them, but around 80%). Any hints as to why this is happening? Thanks!
You could just use the following code to write the entire response to a file:
using (StreamReader MyResponseStream = new StreamReader(hwresponse.GetResponseStream()))
{
using (StreamWriter _FileStream = new StreamWriter("request.txt", true))
{
_FileStream.Write(MyResponseStream.ReadToEnd());
}
}
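Note that ReadToEnd buffers the whole response in memory and decodes it as text. If you would rather keep a byte-for-byte copy of the response (useful when comparing line endings against what WebClient produces), a minimal alternative sketch is to copy the raw stream straight to the file:
using (Stream responseStream = hwresponse.GetResponseStream())
using (FileStream file = new FileStream("request.txt", FileMode.Create, FileAccess.Write))
{
    // Copy the raw bytes without any text decoding or newline translation.
    responseStream.CopyTo(file);
}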
I am using HttpWebRequest.GetRequestStream() to get the request stream and then writing the contents of a couple of files into it.
Before calling the GetResponse() method, I need to get a copy of the data I have written to the stream and keep it in text format.
Here's my code snippet:
HttpWebRequest wr = (HttpWebRequest)WebRequest.Create(url);
wr.Method = "POST";
string requestbody = "Some Body Text";
Stream rs = wr.GetRequestStream();
foreach (var fragment in Documents.ToList())
{
byte[] requestBytes = System.Text.Encoding.UTF8.GetBytes(requestbody);
rs.Write(requestBytes, 0, requestBytes.Length);
FileStream fileStream = new FileStream(fragment.FilePath, FileMode.Open, FileAccess.Read);
byte[] buffer = new byte[4096];
int bytesRead = 0;
while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) != 0)
{
rs.Write(buffer, 0, bytesRead);
}
fileStream.Flush();
fileStream.Close();
fileStream = null;
byte[] trailer = System.Text.Encoding.ASCII.GetBytes("Trailer Text");
rs.Write(trailer, 0, trailer.Length);
}
using (StreamReader strm = new StreamReader(rs))
{
string text = strm.ReadToEnd();
}
rs.Close();
rs = null;
The issue is that I get the error "Stream was not readable".
When I check the property rs.CanRead, it is set to false.
Can anyone help me read the content of my rs stream?
I have tried a couple of approaches, but none of them has worked so far.
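For what it's worth, the stream returned by GetRequestStream() is write-only, which is why rs.CanRead is false and the StreamReader fails. If the goal is just to capture a copy of what is being sent, one option is to build the body in a MemoryStream alongside the request stream and decode that copy afterwards. A rough sketch along those lines (reusing wr and requestbody from above; the fragment loop would write to both streams the same way):
using (MemoryStream bodyCopy = new MemoryStream())
using (Stream rs = wr.GetRequestStream())
{
    byte[] requestBytes = System.Text.Encoding.UTF8.GetBytes(requestbody);
    // Write every chunk to the request stream and to the in-memory copy.
    rs.Write(requestBytes, 0, requestBytes.Length);
    bodyCopy.Write(requestBytes, 0, requestBytes.Length);
    // ... same for each file fragment and the trailer ...
    string text = System.Text.Encoding.UTF8.GetString(bodyCopy.ToArray());
}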
I was writing a light-weight proxy in C#. When decoding the gzip Content-Encoding I noticed that if I use a small buffer size (4096), the stream is only partially decoded, depending on the size of the input. Is this a bug in my code, or is something else needed to make it work? If I set the buffer to 10 MB it works okay, but that defeats the purpose of writing a light-weight proxy.
response = webEx.Response as HttpWebResponse;
Stream input = response.GetResponseStream();
//some other operations on response header
//calling DecompressGzip here
private static string DecompressGzip(Stream input, Encoding e)
{
StringBuilder sb = new StringBuilder();
using (Ionic.Zlib.GZipStream decompressor = new Ionic.Zlib.GZipStream(input, Ionic.Zlib.CompressionMode.Decompress))
{
// works okay for [1024*1024*8];
byte[] buffer = new byte[4096];
int n = 0;
do
{
n = decompressor.Read(buffer, 0, buffer.Length);
if (n > 0)
{
sb.Append(e.GetString(buffer));
}
} while (n > 0);
}
return sb.ToString();
}
Actually, I figured it out. The real problem was not the buffer size but how each chunk was decoded: e.GetString(buffer) converts the entire buffer even when only n bytes were read, and multi-byte characters can be split across chunk boundaries. Writing the decompressed bytes to a MemoryStream and decoding them once at the end works well:
private static string DecompressGzip(Stream input, Encoding e)
{
using (Ionic.Zlib.GZipStream decompressor = new Ionic.Zlib.GZipStream(input, Ionic.Zlib.CompressionMode.Decompress))
{
int read = 0;
var buffer = new byte[4096];
using (MemoryStream output = new MemoryStream())
{
while ((read = decompressor.Read(buffer, 0, buffer.Length)) > 0)
{
output.Write(buffer, 0, read);
}
return e.GetString(output.ToArray());
}
}
}
I have several .gz files, and I want to decompress them one by one.
I have written some simple code using GZipStream in C#, but it fails. What is the correct way to do this? Thanks a lot.
private string Extrgz(string infile)
{
string dir = Path.GetDirectoryName(infile);
string decompressionFileName = dir + Path.GetFileNameWithoutExtension(infile) + "_decompression.bin";
using (GZipStream instream = new GZipStream(File.OpenRead(infile), CompressionMode.Compress))// ArgumentException...
{
using (FileStream outputStream = new FileStream(decompressionFileName, FileMode.Append, FileAccess.Write))
{
int bufferSize = 8192, bytesRead = 0;
byte[] buffer = new byte[bufferSize];
while ((bytesRead = instream.Read(buffer, 0, bufferSize)) > 0)
{
outputStream.Write(buffer, 0, bytesRead);
}
}
}
return decompressionFileName;
}
You need to decompress, but you passed CompressionMode.Compress; replace it with CompressionMode.Decompress.
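Putting that together, a corrected version of the method might look like the sketch below (it also uses Path.Combine so the output file lands next to the input, and FileMode.Create instead of Append; adjust as needed):
private string Extrgz(string infile)
{
    string dir = Path.GetDirectoryName(infile);
    string decompressionFileName = Path.Combine(dir, Path.GetFileNameWithoutExtension(infile) + "_decompression.bin");
    using (GZipStream instream = new GZipStream(File.OpenRead(infile), CompressionMode.Decompress))
    using (FileStream outputStream = new FileStream(decompressionFileName, FileMode.Create, FileAccess.Write))
    {
        byte[] buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = instream.Read(buffer, 0, buffer.Length)) > 0)
        {
            outputStream.Write(buffer, 0, bytesRead);
        }
    }
    return decompressionFileName;
}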
Here:
public static void DeCompressFile(string CompressedFile, string DeCompressedFile)
{
byte[] buffer = new byte[1024 * 1024];
using (System.IO.FileStream fstrmCompressedFile = System.IO.File.OpenRead(CompressedFile)) // fi.OpenRead())
{
using (System.IO.FileStream fstrmDecompressedFile = System.IO.File.Create(DeCompressedFile))
{
using (System.IO.Compression.GZipStream strmUncompress = new System.IO.Compression.GZipStream(fstrmCompressedFile,
System.IO.Compression.CompressionMode.Decompress))
{
int numRead;
while ((numRead = strmUncompress.Read(buffer, 0, buffer.Length)) != 0)
{
    fstrmDecompressedFile.Write(buffer, 0, numRead);
}
} // End Using System.IO.Compression.GZipStream strmUncompress
} // End Using System.IO.FileStream fstrmDecompressedFile
} // End Using System.IO.FileStream fstrmCompressedFile
} // End Sub DeCompressFile
// http://www.dotnetperls.com/decompress
public static byte[] Decompress(byte[] gzip)
{
byte[] baRetVal = null;
using (System.IO.MemoryStream ByteStream = new System.IO.MemoryStream(gzip))
{
// Create a GZIP stream with decompression mode.
// ... Then create a buffer and write into while reading from the GZIP stream.
using (System.IO.Compression.GZipStream stream = new System.IO.Compression.GZipStream(ByteStream
, System.IO.Compression.CompressionMode.Decompress))
{
const int size = 4096;
byte[] buffer = new byte[size];
using (System.IO.MemoryStream memory = new System.IO.MemoryStream())
{
int count;
while ((count = stream.Read(buffer, 0, size)) > 0)
{
    memory.Write(buffer, 0, count);
}
baRetVal = memory.ToArray();
} // End Using System.IO.MemoryStream memory
} // End Using System.IO.Compression.GZipStream stream
} // End Using System.IO.MemoryStream ByteStream
return baRetVal;
} // End Sub Decompress
I'm trying to download a .torrent file (not the contents of the torrent itself) in my .NET application.
Using the following code works for other files, but not for .torrent files. The resulting file is about 1-3 KB smaller than if I download it via a browser. When I open it in a torrent client, it says the torrent is corrupt.
WebClient web = new WebClient();
web.Headers.Add("Content-Type", "application/x-bittorrent");
web.DownloadFile("http://kat.ph/torrents/linux-mint-12-gnome-mate-dvd-64-bit-t6008958/", "test.torrent");
Opening the URL in a browser results in the file being downloaded correctly.
Any ideas as to why this would happen? Are there any good alternatives to WebClient that would download the file correctly?
EDIT: I've tried this as well as WebClient, and it results in the same thing:
private void DownloadFile(string url, string file)
{
byte[] result;
byte[] buffer = new byte[4096];
WebRequest wr = WebRequest.Create(url);
wr.ContentType = "application/x-bittorrent";
using (WebResponse response = wr.GetResponse())
{
using (Stream responseStream = response.GetResponseStream())
{
using (MemoryStream memoryStream = new MemoryStream())
{
int count = 0;
do
{
count = responseStream.Read(buffer, 0, buffer.Length);
memoryStream.Write(buffer, 0, count);
} while (count != 0);
result = memoryStream.ToArray();
using (BinaryWriter writer = new BinaryWriter(new FileStream(file, FileMode.Create)))
{
writer.Write(result);
}
}
}
}
}
The problem is that the server returns content compressed with gzip, and you are writing this compressed content to the file as-is. For such cases you should check the "Content-Encoding" response header and use the appropriate stream reader to decompress the source.
I modified your function to handle gzipped content:
private void DownloadFile(string url, string file)
{
byte[] result;
byte[] buffer = new byte[4096];
WebRequest wr = WebRequest.Create(url);
wr.ContentType = "application/x-bittorrent";
using (WebResponse response = wr.GetResponse())
{
bool gzip = response.Headers["Content-Encoding"] == "gzip";
var responseStream = gzip
? new GZipStream(response.GetResponseStream(), CompressionMode.Decompress)
: response.GetResponseStream();
using (MemoryStream memoryStream = new MemoryStream())
{
int count = 0;
do
{
count = responseStream.Read(buffer, 0, buffer.Length);
memoryStream.Write(buffer, 0, count);
} while (count != 0);
result = memoryStream.ToArray();
using (BinaryWriter writer = new BinaryWriter(new FileStream(file, FileMode.Create)))
{
writer.Write(result);
}
}
}
}
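Alternatively, HttpWebRequest can handle this for you: setting AutomaticDecompression before the request is sent makes the framework add the Accept-Encoding header and transparently decompress the response stream, so the rest of the download code stays unchanged:
var request = (HttpWebRequest)WebRequest.Create(url);
// Let the framework decompress gzip/deflate responses automatically.
request.AutomaticDecompression = DecompressionMethods.GZip | DecompressionMethods.Deflate;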
As part of an upcoming project at my university, I need to write a client that downloads a media file from a server and writes it to the local disk. Since these files can be very large, I need to implement partial download and serialization in order to avoid excessive memory use.
What I came up with:
namespace PartialDownloadTester
{
using System;
using System.Diagnostics.Contracts;
using System.IO;
using System.Net;
using System.Text;
public class DownloadClient
{
public static void Main(string[] args)
{
var dlc = new DownloadClient(args[0], args[1], args[2]);
dlc.DownloadAndSaveToDisk();
Console.ReadLine();
}
private WebRequest request;
// directory of file
private string dir;
// full file identifier
private string filePath;
public DownloadClient(string uri, string fileName, string fileType)
{
this.request = WebRequest.Create(uri);
this.request.Method = "GET";
var sb = new StringBuilder();
sb.Append("C:\\testdata\\DownloadedData\\");
this.dir = sb.ToString();
sb.Append(fileName + "." + fileType);
this.filePath = sb.ToString();
}
public void DownloadAndSaveToDisk()
{
// make sure directory exists
this.CreateDir();
var response = (HttpWebResponse)request.GetResponse();
Console.WriteLine("Content length: " + response.ContentLength);
var rStream = response.GetResponseStream();
int bytesRead = -1;
do
{
var buf = new byte[2048];
bytesRead = rStream.Read(buf, 0, buf.Length);
rStream.Flush();
this.SerializeFileChunk(buf);
}
while (bytesRead != 0);
}
private void CreateDir()
{
if (!Directory.Exists(dir))
{
Directory.CreateDirectory(dir);
}
}
private void SerializeFileChunk(byte[] bytes)
{
Contract.Requires(!Object.ReferenceEquals(bytes, null));
FileStream fs = File.Open(filePath, FileMode.Append);
fs.Write(bytes, 0, bytes.Length);
fs.Flush();
fs.Close();
}
}
}
For testing purposes, I've used the following parameters:
"http://itu.dk/people/janv/mufc_abc.jpg" "mufc_abc" "jpg"
However, the picture is incomplete (only the first ~10% looks right) even though the content length prints 63780, which is the actual size of the image.
So my questions are:
Is this the right way to go for partial download and serialization or is there a better/easier approach?
Is the full content of the response stream stored in client memory? If this is the case, do I need to use HttpWebRequest.AddRange to partially download data from the server in order to conserve my client's memory?
How come the serialization fails and I get a broken image?
Do I introduce a lot of overhead by using FileMode.Append? (MSDN states that this option "seeks to the end of the file".)
Thanks in advance
You could definitely simplify your code using a WebClient:
class Program
{
static void Main()
{
DownloadClient("http://itu.dk/people/janv/mufc_abc.jpg", "mufc_abc.jpg");
}
public static void DownloadClient(string uri, string fileName)
{
using (var client = new WebClient())
{
using (var stream = client.OpenRead(uri))
{
// work with chunks of 2KB => adjust if necessary
const int chunkSize = 2048;
var buffer = new byte[chunkSize];
using (var output = File.OpenWrite(fileName))
{
int bytesRead;
while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
{
output.Write(buffer, 0, bytesRead);
}
}
}
}
}
}
Notice how I am writing only the number of bytes I have actually read from the socket to the output file and not the entire 2KB buffer.
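As for the AddRange question: OpenRead streams the response, so the whole file does not have to be buffered in client memory; you only hold one chunk at a time. If you do want to fetch only part of a resource (for example to resume a download), HttpWebRequest supports HTTP range requests, assuming the server honors them; a minimal sketch:
var request = (HttpWebRequest)WebRequest.Create(uri);
// Ask the server for bytes 0-1023 only (inclusive range).
request.AddRange(0, 1023);
using (var response = (HttpWebResponse)request.GetResponse())
using (var stream = response.GetResponseStream())
{
    // Read the partial content as usual.
}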
I don't know if this is the source of the problem; however, I would change the loop like this:
const int ChunkSize = 2048;
var buf = new byte[ChunkSize];
var rStream = response.GetResponseStream();
int bytesRead;
do {
    bytesRead = rStream.Read(buf, 0, ChunkSize);
    if (bytesRead > 0) {
        this.SerializeFileChunk(buf, bytesRead);
    }
} while (bytesRead > 0);
The serialize method would get an additional argument
private void SerializeFileChunk(byte[] bytes, int numBytes)
and then write the right number of bytes
fs.Write(bytes, 0, numBytes);
UPDATE:
I do not see the need for closing and reopening the file each time. I would also use the using statement, which closes the resources even if an exception occurs. The using statement calls the Dispose() method of the resource at the end, which in turn calls Close() in the case of file streams. using can be applied to all types implementing IDisposable.
var buf = new byte[2048];
using (var rStream = response.GetResponseStream()) {
using (FileStream fs = File.Open(filePath, FileMode.Append)) {
do {
bytesRead = rStream.Read(buf, 0, buf.Length);
fs.Write(buf, 0, bytesRead);
} while (...);
}
}
The using statement does something like this:
{
var rStream = response.GetResponseStream();
try
{
// do some work with rStream here.
} finally {
if (rStream != null) {
rStream.Dispose();
}
}
}
Here is the solution from Microsoft: http://support.microsoft.com/kb/812406
Updated 2021-03-16: it seems the original article is no longer available. Here is an archived copy: https://mskb.pkisolutions.com/kb/812406