Error "This stream does not support seek operations" in C# - c#

I'm trying to get an image from a URL using a byte stream, but I get this error message:
This stream does not support seek operations.
This is my code:
byte[] b;
HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create(url);
WebResponse myResp = myReq.GetResponse();
Stream stream = myResp.GetResponseStream();
int i;
using (BinaryReader br = new BinaryReader(stream))
{
    i = (int)stream.Length; // this is the line that throws
    b = br.ReadBytes(i); // (500000);
}
myResp.Close();
return b;
What am I doing wrong?

You probably want something like this. Either checking the length fails, or the BinaryReader is doing seeks behind the scenes.
HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create(url);
WebResponse myResp = myReq.GetResponse();
byte[] b = null;
using (Stream stream = myResp.GetResponseStream())
using (MemoryStream ms = new MemoryStream())
{
    byte[] buf = new byte[1024]; // reuse one buffer rather than allocating per iteration
    int count;
    do
    {
        count = stream.Read(buf, 0, buf.Length);
        ms.Write(buf, 0, count);
    } while (stream.CanRead && count > 0);
    b = ms.ToArray();
}
Edit:
I checked using Reflector, and it is the call to stream.Length that fails. GetResponseStream returns a ConnectStream, and the Length property on that class throws the exception you saw. As other posters mentioned, you cannot reliably get the length of an HTTP response, so that makes sense.
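For what it's worth, on .NET 4 and later the manual loop can be replaced with Stream.CopyTo, which performs the same chunked copy internally and never touches Length (a minimal sketch, reusing myResp from above):
byte[] b;
using (Stream stream = myResp.GetResponseStream())
using (MemoryStream ms = new MemoryStream())
{
    stream.CopyTo(ms); // reads in chunks until the end of the stream
    b = ms.ToArray();
}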

Use a StreamReader instead:
HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create(url);
WebResponse myResp = myReq.GetResponse();
using (StreamReader reader = new StreamReader(myResp.GetResponseStream()))
{
    return reader.ReadToEnd();
}
(Note: the above returns a string rather than a byte array, so it is only appropriate for text responses; it will corrupt binary data such as images.)

You can't reliably ask an HTTP connection for its length. It's possible to get the server to send you the length in advance (the Content-Length header), but (a) that header is often missing and (b) it's not guaranteed to be correct.
Instead you should:
Create a fixed-length byte[] that you pass to the Stream.Read method
Create a List<byte>
After each read, call List.AddRange to append the contents of your fixed-length buffer onto your byte list
Note that any call to Read, not just the last one, may return fewer bytes than you asked for. Make sure you only append that number of bytes onto your List<byte>, not the whole byte[], or you'll get garbage at the end of your list.
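A minimal sketch of that approach (assuming stream is the response stream from the question; ArraySegment<T> is enumerable on .NET 4.5 and later):
var bytes = new List<byte>();
byte[] buffer = new byte[4096]; // fixed-length read buffer
int count;
while ((count = stream.Read(buffer, 0, buffer.Length)) > 0)
{
    // Append only the bytes actually read on this call, not the whole buffer.
    bytes.AddRange(new ArraySegment<byte>(buffer, 0, count));
}
byte[] b = bytes.ToArray();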

If the server doesn't send a Content-Length header, the stream size is unknown, so you get this error when you touch the Length property.
Read the stream in smaller chunks until you reach the end of the stream.

With images, you don't need to know the number of bytes at all. Just do this:
Image img = null;
string path = "http://www.example.com/image.jpg";
WebRequest request = WebRequest.Create(path);
request.Credentials = CredentialCache.DefaultCredentials; // in case your URL requires Windows auth
WebResponse resp = request.GetResponse();
using (Stream stream = resp.GetResponseStream())
{
    img = Image.FromStream(stream);
    // then use the image
}
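One caveat: GDI+ may itself seek while decoding, so Image.FromStream can fail on a non-seekable response stream. If it does, buffer the response into a seekable MemoryStream first (a sketch under that assumption, reusing resp from above):
Image img;
using (Stream stream = resp.GetResponseStream())
{
    var ms = new MemoryStream();
    stream.CopyTo(ms); // buffer the whole image; a MemoryStream is seekable
    ms.Position = 0;
    // GDI+ expects the stream to remain open for the Image's lifetime,
    // so the MemoryStream is deliberately not disposed here.
    img = Image.FromStream(ms);
}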

Perhaps you should use the System.Net.WebClient API. If you are already using client.OpenRead(url), switch to client.DownloadData(url):
var client = new System.Net.WebClient();
byte[] buffer = client.DownloadData(url);
using (var stream = new MemoryStream(buffer))
{
    // ... your code using the stream ...
}
Obviously this downloads everything before the Stream is created, so it may defeat the purpose of using a Stream; the upside is that the resulting byte array wraps into a MemoryStream, which fully supports seeking.

The length of a stream cannot be read from the stream itself, since the receiver doesn't know how many bytes the sender will send. Try to put a protocol on top of HTTP and send, for example, the length as the first item in the stream.
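A minimal sketch of that length-prefix idea (purely illustrative; the 4-byte frame format here is an assumption, not any standard):
// Sender: write a 4-byte length, then the payload.
static void WriteFramed(Stream s, byte[] payload)
{
    s.Write(BitConverter.GetBytes(payload.Length), 0, 4); // machine byte order; agree on one for cross-platform use
    s.Write(payload, 0, payload.Length);
}

// Receiver: read the length, then exactly that many bytes.
static byte[] ReadFramed(Stream s)
{
    int length = BitConverter.ToInt32(ReadExactly(s, 4), 0);
    return ReadExactly(s, length);
}

static byte[] ReadExactly(Stream s, int count)
{
    byte[] buf = new byte[count];
    int read = 0;
    while (read < count)
    {
        int n = s.Read(buf, read, count - read);
        if (n == 0) throw new EndOfStreamException();
        read += n;
    }
    return buf;
}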

Related

Read the response from a web service [duplicate]


Calculate percent of bytes read from compressed HTTP response stream

In my desktop application I process an HTTP response from the server, and I want to report progress as the number of processed bytes relative to the total response length.
I know the content length from the HTTP header, but the problem is that compression (gzip) was applied to the response body. I can calculate the number of processed bytes, but those are counted after decompression, so the number differs from the Content-Length. The HTTP response's Stream is not seekable, and to determine the progress I cannot use its Position property, because it throws a NotSupportedException, similar to what MSDN documents for the NetworkStream.Position property.
Below is the code that reads from the gzipped response
HttpClient httpClient = new HttpClient();
HttpResponseMessage response = httpClient.GetAsync(gzipGetUri, HttpCompletionOption.ResponseHeadersRead).Result;
long? contentLength = response.Content.Headers.ContentLength;
long totalCount = 0;
int percentDone = 0;
Stream responseStream = response.Content.ReadAsStreamAsync().Result;
Stream gzipStream = new GZipStream(responseStream, CompressionMode.Decompress);
byte[] buffer = new byte[1024];
while (true)
{
    int nBytes = gzipStream.Read(buffer, 0, buffer.Length);
    if (nBytes <= 0)
        break;
    // process decompressed bytes here...
    totalCount += nBytes;
    // This line throws NotSupportedException: This stream does not support seek operations
    percentDone = (int)(((double)responseStream.Position / contentLength) * 100);
}
Console.WriteLine($"content-length: {contentLength}; bytes read from gzipStream: {totalCount}");
The console output (with the percentDone calculation commented out) is:
content-length: 1,316,578; bytes read from gzipStream: 9,410,402
My question is: how can I determine the number of bytes consumed from a non-seekable response stream before they are transformed by decompression? I also cannot use the decompressed count for the percentDone calculation, because I do not know the final number of decompressed bytes in advance.
I guess I could derive a class from Stream that counts the bytes passing through it, use it as a wrapper around responseStream, and pass it as the inner stream to gzipStream, but that solution seems too heavy.
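For reference, that wrapper is smaller than it sounds. A minimal sketch (the class name and members are illustrative, not from any library; only Read does real work):
class CountingReadStream : Stream
{
    private readonly Stream _inner;
    public long BytesRead { get; private set; }

    public CountingReadStream(Stream inner) { _inner = inner; }

    public override int Read(byte[] buffer, int offset, int count)
    {
        int n = _inner.Read(buffer, offset, count);
        BytesRead += n; // compressed bytes consumed so far
        return n;
    }

    public override bool CanRead { get { return true; } }
    public override bool CanSeek { get { return false; } }
    public override bool CanWrite { get { return false; } }
    public override long Length { get { throw new NotSupportedException(); } }
    public override long Position
    {
        get { return BytesRead; }
        set { throw new NotSupportedException(); }
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) { throw new NotSupportedException(); }
    public override void SetLength(long value) { throw new NotSupportedException(); }
    public override void Write(byte[] buffer, int offset, int count) { throw new NotSupportedException(); }
}
Wrap the response stream before handing it to GZipStream, then compute the percentage from BytesRead:
var counting = new CountingReadStream(responseStream);
Stream gzipStream = new GZipStream(counting, CompressionMode.Decompress);
// ... inside the read loop:
percentDone = (int)(100.0 * counting.BytesRead / contentLength.Value);
Note that the count reflects GZipStream's read-ahead buffering, so the percentage is approximate.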

BinaryReader reading different length of data depending on BufferSize

The issue is as follows: I am using an HttpWebRequest to request some online data from dmo.gov.uk. I read the response using a BinaryReader and write it to a MemoryStream. I have packaged the code into a simple test method:
public static byte[] Test(int bufferSize)
{
    var request = (HttpWebRequest)WebRequest.Create("http://www.dmo.gov.uk/xmlData.aspx?rptCode=D3B.2");
    request.Method = "GET";
    request.Credentials = CredentialCache.DefaultCredentials;
    var buffer = new byte[bufferSize];
    using (var httpResponse = (HttpWebResponse)request.GetResponse())
    {
        using (var ms = new MemoryStream())
        {
            using (var reader = new BinaryReader(httpResponse.GetResponseStream()))
            {
                int bytesRead;
                while ((bytesRead = reader.Read(buffer, 0, bufferSize)) > 0)
                {
                    ms.Write(buffer, 0, bytesRead);
                }
            }
            return ms.GetBuffer();
        }
    }
}
My real-life code usually uses a buffer size of 2048 bytes; however, I noticed today that this file has a huge number of empty bytes (\0) at the end, which bloats the file size. As a test I tried increasing the buffer size to nearly the file size I expected (I was expecting ~80 KB, so I made the buffer size 79000) and now I get the right file size. But I'm confused: I expected to get the same file size regardless of the buffer size used to read the data.
The following test:
Console.WriteLine(Test(2048).Length);
Console.WriteLine(Test(79000).Length);
Console.ReadLine();
Yields the following output:
131072
81341
The second figure, using the high buffer size, is the exact file size I was expecting (this file changes daily, so expect the size to differ after today's date). The first figure contains \0 for everything after the expected file size.
What's going on here?
You should change ms.GetBuffer() to ms.ToArray().
GetBuffer returns the entire underlying buffer, whose capacity is usually larger than the number of bytes written (a MemoryStream grows its buffer in steps), so everything past the written data is zero-filled. ToArray returns only the bytes that were actually written to the MemoryStream.
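A quick way to see the difference (the exact capacity is an implementation detail, shown here only for illustration):
var ms = new MemoryStream();
ms.Write(new byte[10], 0, 10);
Console.WriteLine(ms.GetBuffer().Length); // capacity, e.g. 256 - includes unused, zero-filled slack
Console.WriteLine(ms.ToArray().Length);   // 10 - only the bytes actually written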

C# HttpListener Response + GZipStream

I use HttpListener for my own HTTP server (I do not use IIS). I want to compress my OutputStream with GZip compression:
byte[] refBuffer = Encoding.UTF8.GetBytes(...some data source...);
var varByteStream = new MemoryStream(refBuffer);
System.IO.Compression.GZipStream refGZipStream = new GZipStream(varByteStream, CompressionMode.Compress, false);
refGZipStream.BaseStream.CopyTo(refHttpListenerContext.Response.OutputStream);
refHttpListenerContext.Response.AddHeader("Content-Encoding", "gzip");
But I get this error in Chrome:
ERR_CONTENT_DECODING_FAILED
If I remove the AddHeader call, it works, but the response does not appear to be compressed. What am I doing wrong?
The problem is that your transfer is going in the wrong direction. What you want to do is attach the GZipStream to the Response.OutputStream and then call CopyTo on the MemoryStream, passing in the GZipStream, like so:
refHttpListenerContext.Response.AddHeader("Content-Encoding", "gzip");
byte[] refBuffer = Encoding.UTF8.GetBytes(...some data source...);
var varByteStream = new MemoryStream(refBuffer);
using (var refGZipStream = new GZipStream(refHttpListenerContext.Response.OutputStream, CompressionMode.Compress, false))
{
    varByteStream.CopyTo(refGZipStream);
} // disposing the GZipStream flushes the final gzip block; Flush() alone does not
The first problem (as Brent M Spell mentioned) is the wrong position of the header. The second is that the GZipStream isn't used properly: GZipStream compresses what you write into it and emits the result to the stream you hand it, so give it an empty MemoryStream as the destination and write your uncompressed buffer into the GZipStream. The MemoryStream then ends up holding the compressed content. So you need something like:
byte[] buffer = ....;
using (var ms = new MemoryStream())
{
    using (var zip = new GZipStream(ms, CompressionMode.Compress, true))
        zip.Write(buffer, 0, buffer.Length);
    buffer = ms.ToArray();
}
response.AddHeader("Content-Encoding", "gzip");
response.ContentLength64 = buffer.Length;
response.OutputStream.Write(buffer, 0, buffer.Length);
Hopefully this helps; this related question discusses how to get GZIP working:
Sockets in C#: How to get the response stream?

Out of Memory Exception When Using HttpWebRequest to Stream Large File

I get an OutOfMemoryException when using an HTTP PUT to upload a large file. I am using the asynchronous model, as shown in the code. I am trying to send 8K blocks of data to a Windows 2008 R2 server. The exception consistently occurs when I attempt to write a block of data beyond 536,868,864 bytes. The exception occurs on the requestStream.Write call in the code snippet below.
Looking for reasons why.
Note: smaller files PUT fine. The logic also works if I write to a local FileStream. Running VS 2010, .NET 4.0 on a Windows 7 Ultimate client computer.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("Http://website/FileServer/filename");
request.Method = WebRequestMethods.Http.Put;
request.SendChunked = true;
request.AllowWriteStreamBuffering = true;
...
request.BeginGetRequestStream( new AsyncCallback(EndGetStreamCallback), state);
...
int chunk = 8192; // other values give same result
....
private static void EndGetStreamCallback(IAsyncResult ar)
{
    long limit = 0;
    long fileLength;
    HttpState state = (HttpState)ar.AsyncState;
    Stream requestStream = null;
    // End the asynchronous call to get the request stream.
    try
    {
        requestStream = state.Request.EndGetRequestStream(ar);
        // Copy the file contents to the request stream.
        FileStream stream = new FileStream(state.FileName, FileMode.Open, FileAccess.Read, FileShare.None, chunk, FileOptions.SequentialScan);
        BinaryReader binReader = new BinaryReader(stream);
        fileLength = stream.Length;
        // Set Position to the beginning of the stream.
        binReader.BaseStream.Position = 0;
        byte[] fileContents = new byte[chunk];
        // Read file from buffer
        while (limit < fileLength)
        {
            fileContents = binReader.ReadBytes(chunk);
            // the next 2 lines attempt to write to network and server
            // (note: the last block may be shorter than chunk; fileContents.Length would be safer)
            requestStream.Write(fileContents, 0, chunk); // causes Out of memory after 536,868,864 bytes
            requestStream.Flush(); // I get the same result with or without Flush
            limit += chunk;
        }
        // IMPORTANT: Close the request stream before sending the request.
        stream.Close();
        requestStream.Close();
    }
}
You apparently have this documented problem: when AllowWriteStreamBuffering is true, HttpWebRequest buffers all the data written to the request in memory, so the whole upload accumulates until it exhausts memory. The solution is to set that property to false:
To work around this issue, set the HttpWebRequest.AllowWriteStreamBuffering property to false.
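Applied to the code above, that looks like this (a minimal sketch; SendChunked keeps the request streaming without a Content-Length up front):
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("Http://website/FileServer/filename");
request.Method = WebRequestMethods.Http.Put;
request.SendChunked = true;
// Stream each block straight to the network instead of buffering the whole upload in memory.
request.AllowWriteStreamBuffering = false;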
