I want to write my HTTP response to a file, but the file is always empty. Why?
using (Stream outputfile = File.OpenWrite(objecttype + ".txt"))
{
    byte[] buffer = new byte[8192];
    int bytesRead;
    while ((bytesRead = context.Response.OutputStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        outputfile.Write(buffer, 0, bytesRead);
    }
    outputfile.Flush();
    outputfile.Close();
}
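As an aside on why the file ends up empty: Response.OutputStream is the stream the server writes to the client, so reading from it generally yields nothing. If the bytes actually come from a remote server, the readable side is GetResponseStream() on the response object. A minimal sketch under that assumption (the request variable here is a hypothetical HttpWebRequest):
using (var response = (HttpWebResponse)request.GetResponse())
using (Stream responseStream = response.GetResponseStream())
using (Stream outputFile = File.OpenWrite(objecttype + ".txt"))
{
    byte[] buffer = new byte[8192];
    int bytesRead;
    // read from the response stream and write each chunk to the file
    while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        outputFile.Write(buffer, 0, bytesRead);
    }
}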
I have been building my own HTTP/1.0 proxy server using HttpWebRequest/HttpWebResponse. Part of the original code streamed the response from the remote server directly to the client, like so:
if (response != null)
{
    List<Tuple<String, String>> responseHeaders = ProcessResponse(response);
    StreamWriter myResponseWriter = new StreamWriter(outStream);
    Stream responseStream = response.GetResponseStream();
    HttpStatusCode statusCode = response.StatusCode;
    string statusDesc = response.StatusDescription;
    Byte[] buffer;
    if (response.ContentLength > 0)
    {
        buffer = new Byte[response.ContentLength];
    }
    else
    {
        buffer = new Byte[BUFFER_SIZE];
    }
    int bytesRead;
    //send the response status and response headers
    WriteResponseStatus(statusCode, statusDesc, myResponseWriter);
    WriteResponseHeaders(myResponseWriter, responseHeaders);
    while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // this is the response body being streamed directly to the client browser
        outStream.Write(buffer, 0, bytesRead);
    }
}
I have been trying to intercept the response body so that I can modify its contents, but before I can do this I need to decompress it if it has been gzip/br/deflate-encoded by the remote server. This is what I have come up with so far, but as you can see from my comments I just can't work out how to store the byte stream in one variable so that I can then send it for decompression:
Byte[] buffer;
if (response.ContentLength > 0)
    buffer = new Byte[response.ContentLength];
else
    buffer = new Byte[BUFFER_SIZE];
int bytesRead;
var res = "";
// if the url and content type matches the criteria, then we want to edit it
if (hostPathMatch.Count > 0 && contentTypeMatch.Count > 0)
{
    while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        // how do we send this response stream to a var so that the entire contents can be sent to decompress?
        //res += UTF8Encoding.UTF8.GetString(buffer, 0, bytesRead); // this doesn't work as it mangles gzipped contents
    }
    //was the page compressed? check the content-encoding header.
    if (responseHeaders.Any(p => p.Item1.ToLower() == "content-encoding" && p.Item2.ToLower() == "gzip"))
    {
        Output._Log.Output_Log("CONTENT IS GZIPPED");
        res = Tools.Unzip(res); // expects byte[], returns UTF8
    }
    // THIS IS WHERE WE WILL MODIFY THE BODY CONTENTS
    res = res.Replace("Foo", "Bar Bar");
    // then we will re-compress
    // update the response headers with the correct content length after modification
    responseHeaders.RemoveAll(p => p.Item1 == "Content-Length");
    responseHeaders.Add(new Tuple<string, string>("Content-Length", res.Length.ToString()));
    //send the response status and response headers
    WriteResponseStatus(statusCode, statusDesc, myResponseWriter);
    WriteResponseHeaders(myResponseWriter, responseHeaders);
}
else // we didn't want to modify this file, so just stream it out directly to the browser
{
    //send the response status and response headers
    WriteResponseStatus(statusCode, statusDesc, myResponseWriter);
    WriteResponseHeaders(myResponseWriter, responseHeaders);
    while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        outStream.Write(buffer, 0, bytesRead);
    }
}
For the case where you don't know the total size in advance, you could do something like the following:
[...]
List<byte> data = new List<byte>();
while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
{
    for (int i = 0; i < bytesRead; ++i)
        data.Add(buffer[i]);
}
var bytes = data.ToArray();
[...]
Should you know it in advance, you could use something like:
[...]
int offset = 0;
byte[] data = new byte[TOTAL_SIZE];
while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
{
    // copy only the bytes actually read in this iteration
    Array.Copy(buffer, 0, data, offset, bytesRead);
    offset += bytesRead;
}
[...]
Note that none of the snippets above have been tested, and some checks would have to be added (e.g. out-of-range checks). Furthermore, depending on the use case they might not fit your needs (e.g. for huge data).
PS: DO NOT FORGET to dispose/clean up all of the IDisposables.
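To address the original goal of decompressing a gzipped body, here is a minimal sketch of one way to combine the buffering above with GZipStream, assuming responseStream and responseHeaders are as in the question (the question's own Tools.Unzip helper could be used in place of GZipStream):
// requires: using System.IO.Compression; using System.Text; (plus System.Linq for Any)
byte[] body;
using (var bodyStream = new MemoryStream())
{
    var buffer = new byte[8192];
    int bytesRead;
    // accumulate the raw (possibly compressed) body into one byte array
    while ((bytesRead = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        bodyStream.Write(buffer, 0, bytesRead);
    }
    body = bodyStream.ToArray();
}
string res;
if (responseHeaders.Any(p => p.Item1.ToLower() == "content-encoding" && p.Item2.ToLower() == "gzip"))
{
    // decompress the gzipped bytes before treating them as text
    using (var compressed = new MemoryStream(body))
    using (var gzip = new GZipStream(compressed, CompressionMode.Decompress))
    using (var decompressed = new MemoryStream())
    {
        gzip.CopyTo(decompressed);
        res = Encoding.UTF8.GetString(decompressed.ToArray());
    }
}
else
{
    res = Encoding.UTF8.GetString(body);
}
After modifying res, the body would have to be re-compressed (or the Content-Encoding header removed) and the Content-Length recalculated from the bytes actually sent, as the question's comments already note.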
I'm trying to capture an audio stream in CSCore and save it in various encodings and to various locations. One of my intended output encodings is MP3 via the MediaFoundationEncoder APIs.
I am able to successfully encode to MP3 when saving to a local file path. However, if I try to write to a memory stream, the memory stream ends up with a length of 0.
What is wrong with this implementation?
Working Local Storage
var fileName = "c:\audio.mp3";
using (var encoder = MediaFoundationEncoder.CreateMP3Encoder(waveFormat, fileName, waveFormat.BytesPerSecond))
{
    byte[] buffer = new byte[waveFormat.BytesPerSecond];
    int read;
    while ((read = inputStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        encoder.Write(buffer, 0, read);
    }
}
Non-working Memory Stream
using (var outputStream = new MemoryStream())
{
    var encoder = MediaFoundationEncoder.CreateMP3Encoder(waveFormat, outputStream, waveFormat.BytesPerSecond);
    var buffer = new byte[waveFormat.BytesPerSecond];
    int read;
    while ((read = inputStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        encoder.Write(buffer, 0, read);
    }
    log.Debug("MP3 File Size: " + outputStream.Length); // <-- Returns as 0
}
You have to dispose the encoder after writing; this completes the output stream.
public static byte[] EncodeBytes(byte[] bytes, WaveFormat waveFormat)
{
    var outputStream = new MemoryStream();
    var encoder = MediaFoundationEncoder.CreateMP3Encoder(waveFormat, outputStream, waveFormat.BytesPerSecond);
    var buffer = new byte[waveFormat.BytesPerSecond];
    var inputStream = new MemoryStream(bytes);
    int read;
    while ((read = inputStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        encoder.Write(buffer, 0, read);
    }
    // disposing the encoder finalizes the MP3 data in outputStream
    encoder.Dispose();
    return outputStream.ToArray();
}
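A short usage example of the helper above; pcmBytes and waveFormat are placeholders for your own raw audio data and its matching format:
byte[] pcmBytes = File.ReadAllBytes("input.raw"); // placeholder: raw PCM matching waveFormat
byte[] mp3Bytes = EncodeBytes(pcmBytes, waveFormat);
File.WriteAllBytes("audio.mp3", mp3Bytes);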
My requirement is to transfer a zip file of 400 MB or more. The following code works for at least 40 MB, but for more I would have to change byte[] bytes = new byte[50000000]; to byte[] bytes = new byte[400000000]; and maxRequestLength to maxRequestLength="409600".
The problem is that byte[] bytes = new byte[100000000]; already returns an error about insufficient space. So how can I transfer large files using WebClient?
WebClient client = new WebClient();
client.AllowWriteStreamBuffering = true;
UriBuilder ub = new UriBuilder("http://localhost:57596/UploadImages.ashx");
ub.Query = "ImageName=" + "DataSet" + DataSetId + ".zip";
client.OpenWriteCompleted += (InputStream, eArguments) =>
{
    try
    {
        using (Stream output = eArguments.Result)
        {
            output.Write(ImagesAux, 0, (int)ImagesAux.Length);
            //numeroimagem++;
        }
    }
    catch (Exception ex)
    {
        MessageBox.Show(ex.Message);
        //throw;
    }
};
client.OpenWriteAsync(ub.Uri);
In UploadImages.ashx:
public void ProcessRequest(HttpContext context)
{
    //context.Response.ContentType = "text/plain";
    //context.Response.Write("Hello World");
    string ImageName = context.Request.QueryString["ImageName"];
    string UploadPath = context.Server.MapPath("~/ServerImages/");
    using (FileStream stream = File.Create(UploadPath + ImageName))
    {
        byte[] bytes = new byte[50000000]; //
        int bytesToRead = 0;
        while ((bytesToRead = context.Request.InputStream.Read(bytes, 0, bytes.Length)) != 0)
        {
            stream.Write(bytes, 0, bytesToRead);
            stream.Close();
        }
    }
}
In Web.config:
<httpRuntime targetFramework="4.5" maxRequestLength="40960"/>
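Note that maxRequestLength is specified in kilobytes, so roughly 400 MB corresponds to the 409600 mentioned above; on IIS 7+ the request-filtering limit (specified in bytes) usually has to be raised as well. A possible Web.config sketch, with values assumed for a 400 MB upload:
<system.web>
  <!-- maxRequestLength is in KB: 409600 KB = 400 MB -->
  <httpRuntime targetFramework="4.5" maxRequestLength="409600" />
</system.web>
<system.webServer>
  <security>
    <requestFiltering>
      <!-- maxAllowedContentLength is in bytes: 419430400 = 400 MB -->
      <requestLimits maxAllowedContentLength="419430400" />
    </requestFiltering>
  </security>
</system.webServer>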
You should never load everything into memory and then write it all back to disk; instead, read pieces and write them to disk as you go.
When you are done reading, close the stream you are writing to.
Otherwise, as soon as you reach sizes in the GB range you can hit an OutOfMemoryException very quickly.
So I would change writing the bytes to disk from this:
using (FileStream stream = File.Create(UploadPath + ImageName))
{
    byte[] bytes = new byte[50000000]; //
    int bytesToRead = 0;
    while ((bytesToRead = context.Request.InputStream.Read(bytes, 0, bytes.Length)) != 0)
    {
        stream.Write(bytes, 0, bytesToRead);
        stream.Close();
    }
}
to this:
using (FileStream stream = File.Create(UploadPath + ImageName))
{
    byte[] bytes = new byte[1024];
    int bytesToRead;
    // read one chunk at a time and write it straight to disk,
    // so the whole request never has to fit in memory
    while ((bytesToRead = context.Request.InputStream.Read(bytes, 0, bytes.Length)) > 0)
    {
        stream.Write(bytes, 0, bytesToRead);
    }
    // close the output stream only after all reading is done
    stream.Close();
}
1024 would be the buffer size.
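If .NET 4 or later is available, Stream.CopyTo does the same chunked copy internally, which keeps the handler short; a minimal sketch:
using (FileStream stream = File.Create(UploadPath + ImageName))
{
    // CopyTo reads and writes in chunks, so the whole upload never sits in memory
    context.Request.InputStream.CopyTo(stream);
}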
I used the following code to download a zipped folder in an ASP.NET website:
string path = #"E:\sample.zip";
FileInfo file = new FileInfo(path);
int len = (int)file.Length, bytes;
Response.ContentType = "text/html";
// Response.AddHeader "Content-Disposition", "attachment;filename=" + filename;
Response.AppendHeader("content-length", len.ToString());
byte[] buffer = new byte[1024];
using(Stream stream = File.OpenRead(path)) {
while (len > 0 && (bytes =
stream.Read(buffer, 0, buffer.Length)) > 0)
{
Response.OutputStream.Write(buffer, 0, bytes);
len -= bytes;
}
}
It works fine.
But my problem is when I use the same code in the library file, like this:
FileInfo file = new FileInfo(ZipPath);
int len = (int)file.Length, bytes;
HttpResponse Response = new HttpResponse(TextWriter.Null);
Response.ContentType = "text/html";
Response.AppendHeader("content-length", len.ToString());
byte[] buffer = new byte[1024];
using (Stream stream = File.OpenRead(ZipPath))
{
    while (len > 0 && (bytes = stream.Read(buffer, 0, buffer.Length)) > 0)
    {
        Response.OutputStream.Write(buffer, 0, bytes);
        len -= bytes;
    }
}
It throws an error:
OutputStream is not available when a custom TextWriter is used.
I guess the problem is in this line:
HttpResponse Response = new HttpResponse(TextWriter.Null);
Can you provide a solution?
You can try this code:
TextWriter sw = new StringWriter();
HttpResponse Response = new HttpResponse(sw);
I replaced it with:
HttpResponse response = HttpContext.Current.Response;
and it works fine. Thanks all for your support.
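For completeness, a minimal sketch of serving a zip through the current response inside a web request, reusing the path from the question; the content type and file name sent to the client are assumptions:
HttpResponse response = HttpContext.Current.Response;
response.ContentType = "application/zip";
response.AppendHeader("Content-Disposition", "attachment; filename=sample.zip");
// TransmitFile streams the file to the client without buffering it all in memory
response.TransmitFile(@"E:\sample.zip");
response.End();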
With the help of the question C# 4.0: Convert pdf to byte[] and vice versa, I was able to convert a byte[] to a PDF. The byte array length is approximately 25990. When I try to open the PDF it says the file is corrupted. What could be the reason?
I tried BinaryWriter but it creates a PDF of 0 KB.
It's a response from a web service.
Sample Code
WebResponse resp = request.GetResponse();
var buffer = new byte[4096];
Stream responseStream = resp.GetResponseStream();
{
    int count;
    do
    {
        count = responseStream.Read(buffer, 0, buffer.Length);
        memoryStream.Write(buffer, 0, responseStream.Read(buffer, 0, buffer.Length));
    } while (count != 0);
}
resp.Close();
byte[] memoryBuffer = memoryStream.ToArray();
System.IO.File.WriteAllBytes(@"E:\sample1.pdf", memoryBuffer);
int s = memoryBuffer.Length;
BinaryWriter binaryWriter = new BinaryWriter(File.Open(@"E:\sample2.pdf", FileMode.Create));
binaryWriter.Write(memoryBuffer);
You are reading twice from the stream but only writing one buffer. Change this:
count = responseStream.Read(buffer, 0, buffer.Length);
memoryStream.Write(buffer, 0, responseStream.Read(buffer, 0, buffer.Length));
To this:
count = responseStream.Read(buffer, 0, buffer.Length);
memoryStream.Write(buffer, 0, count);
It seems you're missing some bytes there because you have one unnecessary read. Try this:
do
{
    count = responseStream.Read(buffer, 0, buffer.Length);
    memoryStream.Write(buffer, 0, count);
} while (count != 0);
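Putting the fix together, a minimal sketch of the corrected copy with the streams wrapped in using blocks (the memoryStream declaration was not shown in the question, so it is added here):
using (WebResponse resp = request.GetResponse())
using (Stream responseStream = resp.GetResponseStream())
using (var memoryStream = new MemoryStream())
{
    var buffer = new byte[4096];
    int count;
    // write only the number of bytes actually read in this iteration
    while ((count = responseStream.Read(buffer, 0, buffer.Length)) > 0)
    {
        memoryStream.Write(buffer, 0, count);
    }
    File.WriteAllBytes(@"E:\sample1.pdf", memoryStream.ToArray());
}
The 0 KB result from the BinaryWriter attempt is likely a flushing issue: as with the MP3 encoder earlier, the writer (and the FileStream it wraps) should be disposed so that buffered bytes actually reach the file.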