I get an OutOfMemoryException when using HTTP PUT to upload a large file. I am using the asynchronous model shown in the code below, sending the file in 8K blocks to a Windows 2008 R2 server. The exception consistently occurs on the requestStream.Write call in the snippet below, once roughly 536,868,864 bytes have been written.
What could be causing this?
Note: smaller files PUT fine, and the same logic works if I write to a local FileStream instead. I am running VS 2010 and .NET 4.0 on a Windows 7 Ultimate client.
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("Http://website/FileServer/filename");
request.Method = WebRequestMethods.Http.Put;
request.SendChunked = true;
request.AllowWriteStreamBuffering = true;
...
request.BeginGetRequestStream( new AsyncCallback(EndGetStreamCallback), state);
...
int chunk = 8192; // other values give same result
....
private static void EndGetStreamCallback(IAsyncResult ar) {
    long limit = 0;
    long fileLength;
    HttpState state = (HttpState)ar.AsyncState;
    Stream requestStream = null;
    // End the asynchronous call to get the request stream.
    try {
        requestStream = state.Request.EndGetRequestStream(ar);
        // Copy the file contents to the request stream.
        FileStream stream = new FileStream(state.FileName, FileMode.Open, FileAccess.Read, FileShare.None, chunk, FileOptions.SequentialScan);
        BinaryReader binReader = new BinaryReader(stream);
        fileLength = stream.Length;
        // Set Position to the beginning of the stream.
        binReader.BaseStream.Position = 0;
        byte[] fileContents = new byte[chunk];
        // Read the file into the buffer and push it to the request stream.
        while (limit < fileLength)
        {
            fileContents = binReader.ReadBytes(chunk);
            // the next 2 lines attempt to write to network and server
            requestStream.Write(fileContents, 0, chunk); // causes Out of memory after 536,868,864 bytes
            requestStream.Flush(); // I get same result with or without Flush
            limit += chunk;
        }
        // IMPORTANT: Close the request stream before sending the request.
        stream.Close();
        requestStream.Close();
    }
    // catch/finally omitted from this snippet
}
You apparently have this documented problem: when AllowWriteStreamBuffering is true, HttpWebRequest buffers, in memory, all of the data you write to the request stream. So the "solution" is to set that property to false:
To work around this issue, set the HttpWebRequest.AllowWriteStreamBuffering property to false.
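As a rough sketch (reusing the request, chunk, binReader and fileContents names from the question; this is an illustration of the workaround, not a tested drop-in):
HttpWebRequest request = (HttpWebRequest)WebRequest.Create("http://website/FileServer/filename");
request.Method = WebRequestMethods.Http.Put;

// Turn off in-memory buffering of the request body. With buffering disabled,
// the request must either be chunked or have ContentLength set up front,
// otherwise GetRequestStream throws a ProtocolViolationException.
request.AllowWriteStreamBuffering = false;
request.SendChunked = true;

// In the write loop, send only the bytes actually read so the final,
// partially-filled block doesn't push garbage to the server:
int bytesRead;
while ((bytesRead = binReader.Read(fileContents, 0, chunk)) > 0)
{
    requestStream.Write(fileContents, 0, bytesRead);
}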
I'm trying to get an image from a URL using a byte stream, but I get this error message:
This stream does not support seek operations.
This is my code:
byte[] b;
HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create(url);
WebResponse myResp = myReq.GetResponse();
Stream stream = myResp.GetResponseStream();
int i;
using (BinaryReader br = new BinaryReader(stream))
{
    i = (int)(stream.Length);
    b = br.ReadBytes(i); // (500000);
}
myResp.Close();
return b;
What am I doing wrong, guys?
You probably want something like this. Either checking the length fails, or the BinaryReader is doing seeks behind the scenes.
HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create(url);
WebResponse myResp = myReq.GetResponse();
byte[] b = null;
using (Stream stream = myResp.GetResponseStream())
using (MemoryStream ms = new MemoryStream())
{
    int count = 0;
    do
    {
        byte[] buf = new byte[1024];
        count = stream.Read(buf, 0, 1024);
        ms.Write(buf, 0, count);
    } while (stream.CanRead && count > 0);
    b = ms.ToArray();
}
Edit: I checked using Reflector, and it is the call to stream.Length that fails. GetResponseStream returns a ConnectStream, and the Length property on that class throws the exception you saw. As other posters have mentioned, you cannot reliably get the length of an HTTP response, so that makes sense.
Use a StreamReader instead:
HttpWebRequest myReq = (HttpWebRequest)WebRequest.Create(url);
WebResponse myResp = myReq.GetResponse();
StreamReader reader = new StreamReader(myResp.GetResponseStream());
return reader.ReadToEnd();
(Note - the above returns a String instead of a byte array)
You can't reliably ask an HTTP connection for its length. It's possible to get the server to send you the length in advance, but (a) that header is often missing and (b) it's not guaranteed to be correct.
Instead you should:
Create a fixed-length byte[] that you pass to the Stream.Read method
Create a List<byte>
After each read, call List.AddRange to append the contents of your fixed-length buffer onto your byte list
Note that a call to Read (the last one in particular) may return fewer bytes than you asked for. Make sure you only append that number of bytes onto your List<byte>, not the whole byte[], or you'll get garbage at the end of your list; see the sketch below.
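A minimal sketch of that approach, assuming a plain Stream named stream from GetResponseStream and an arbitrary 4096-byte buffer (both are just illustrative choices):
// needs: using System.Collections.Generic; using System.Linq;
byte[] buf = new byte[4096];            // fixed-length read buffer
List<byte> bytes = new List<byte>();
int read;
while ((read = stream.Read(buf, 0, buf.Length)) > 0)
{
    // Append only the bytes actually read, never the whole buffer.
    bytes.AddRange(buf.Take(read));
}
byte[] b = bytes.ToArray();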
If the server doesn't send a length specification in the HTTP header, the stream size is unknown, so you get the error when trying to use the Length property.
Read the stream in smaller chunks, until you reach the end of the stream.
With images, you don't need to read the number of bytes at all. Just do this:
Image img = null; // System.Drawing.Image
string path = "http://www.example.com/image.jpg";
WebRequest req = WebRequest.Create(path);
req.Credentials = CredentialCache.DefaultCredentials; // in case your URL has Windows auth
WebResponse resp = req.GetResponse();
using (Stream stream = resp.GetResponseStream())
{
    img = Image.FromStream(stream);
    // then use the image
}
Perhaps you should use the System.Net.WebClient API. If you are already using client.OpenRead(url), use client.DownloadData(url) instead:
var client = new System.Net.WebClient();
byte[] buffer = client.DownloadData(url);
using (var stream = new MemoryStream(buffer))
{
    // ... your code using the stream ...
}
Obviously this downloads everything before the Stream is created, so it may defeat the purpose of using a Stream. webClient.DownloadData("https://your.url") gets a byte array which you can then turn into a MemoryStream.
The length of a stream cannot be read from the stream itself, since the receiver does not know how many bytes the sender will send. Try putting a protocol on top of HTTP and, for example, sending the length as the first item in the stream.
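For example, a minimal sketch of the sending side of such a framing (networkStream, filePath and the 4-byte length prefix are all hypothetical choices for illustration):
// Write a 4-byte length prefix, then the payload itself, so the
// receiver knows exactly how many bytes to expect.
byte[] payload = File.ReadAllBytes(filePath);
networkStream.Write(BitConverter.GetBytes(payload.Length), 0, 4);
networkStream.Write(payload, 0, payload.Length);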
I use the following code to return a byte array in HttpResponseMessage:
using (WebResponse response = (HttpWebResponse)request.GetResponse())
{
    byte[] bytes = ReadFully(response.GetResponseStream());
    ......
}

public static byte[] ReadFully(Stream input)
{
    byte[] buffer = new byte[16 * 1024];
    using (MemoryStream ms = new MemoryStream())
    {
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        return ms.ToArray(); // This line throws OutOfMemory exception
    }
}
An OutOfMemoryException is thrown at the final return ms.ToArray() statement.
I need to set the resulting byte[] as HttpResponseMessage.Content.
You should return the stream directly instead of reading it into memory first.
public HttpResponseMessage CreateMessage(Stream input)
{
    HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new StreamContent(input);
    return result;
}
Do not forget to set the appropriate headers etc.
Edit
... I need to write the byte array from the HttpResponseMessage into a file
Based on your last comment you changed your question and want to go the other way. Here is an example of writing to a file from a web response.
public void WriteToFile(HttpWebResponse response)
{
    var inStream = response.GetResponseStream();
    using (var file = System.IO.File.OpenWrite("your file path here"))
    {
        inStream.CopyTo(file);
    }
}
Igor posted the solution, and it is the correct way to deal with stream content. Use one of the MVC helper functions like File(stream, contentType) or classes like StreamContent to send the stream contents directly to the client, e.g.:
return File(myStream,myExcelContentTypeString);
or
return File(myStream,myExcelContentTypeString,"ReallyBigFile.xlsx");
As for the error itself: an OOM can occur because memory is too fragmented to allocate a new object. A MemoryStream stores its data in a buffer; when the data exceeds the buffer's capacity, it allocates a new buffer with double the capacity and copies the old data across. Copying 250 MB of data this way causes a lot of reallocations and thus a lot of memory fragmentation.
This can be avoided by specifying the desired capacity in the stream's constructor, which allocates a large enough buffer immediately.
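A minimal sketch of that idea (expectedSize is a hypothetical variable holding the known or estimated content length; 250 MB is just the size mentioned above):
// Pre-sizing avoids the repeated grow-and-copy cycle described above.
int expectedSize = 250 * 1024 * 1024;
using (var ms = new MemoryStream(expectedSize))
{
    // ... write the generated content into ms as before ...
}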
It's even better, though, to avoid caching the content at all by sending it to the browser directly.
I am receiving data from a client and saving it to the local drive on the local host. This works for a 221 MB file, but a test with a 1 GB file gives the following exception:
An unhandled exception of type 'System.OutOfMemoryException' occurred in mscorlib.dll
Following is the server-side code where the exception is thrown.
UPDATED
Server:
public void Thread()
{
    TcpListener tcpListener = new TcpListener(ipaddr, port);
    tcpListener.Start();
    MessageBox.Show("Listening on port " + port);
    TcpClient client = new TcpClient();
    int bufferSize = 1024;
    NetworkStream netStream;
    int bytesRead = 0;
    int allBytesRead = 0;

    // Start listening
    tcpListener.Start();

    // Accept client
    client = tcpListener.AcceptTcpClient();
    netStream = client.GetStream();

    // Read length of incoming data to reserve a buffer for it
    byte[] length = new byte[4];
    bytesRead = netStream.Read(length, 0, 4);
    int dataLength = BitConverter.ToInt32(length, 0);

    // Read the data
    int bytesLeft = dataLength;
    byte[] data = new byte[dataLength];
    while (bytesLeft > 0)
    {
        int nextPacketSize = (bytesLeft > bufferSize) ? bufferSize : bytesLeft;
        bytesRead = netStream.Read(data, allBytesRead, nextPacketSize);
        allBytesRead += bytesRead;
        bytesLeft -= bytesRead;
    }

    // Save to desktop
    File.WriteAllBytes(@"D:\LALA\Miscellaneous\" + shortFileName, data);

    // Clean up
    netStream.Close();
    client.Close();
}
I am sending the file size from the client side first, followed by the data.
1) Should I increase the buffer size, or use some other technique?
2) File.WriteAllBytes() and File.ReadAllBytes() seem to block and freeze the PC. Is there an async alternative that could also report the progress of the file received on the server side?
You don't need to read the whole thing into memory before writing it to disk. Just copy straight from the network stream to a FileStream:
byte[] length = new byte[4];
// TODO: Validate that bytesRead is 4 after this... it's unlikely but *possible*
// that you might not read the whole length in one go.
bytesRead = netStream.Read(length, 0, 4);
int bytesLeft = BitConverter.ToInt32(length, 0);

using (var output = File.Create(@"D:\Javed\Miscellaneous\" + shortFileName))
{
    // Note: CopyTo's second parameter is a buffer size, not a byte count,
    // so don't pass bytesLeft there; the default buffer is fine.
    netStream.CopyTo(output);
}
Note that instead of calling netStream.Close() explicitly, you should use a using statement:
using (Stream netStream = ...)
{
// Read from it
}
That way the stream will be closed even if an exception is thrown.
The CLR has a per-object size limit a bit short of 2 GB. That's the theory, though; in practice, how much you can allocate depends on how much contiguous memory the runtime can give you, and I wouldn't expect it to hand out a 1 GB buffer reliably. You should allocate a smaller buffer and write the data to the disk file in chunks.
The "out of memory" exception happens because you are trying to place the entire file into memory before dumping it to disk. This is suboptimal, because you don't need the entire file in memory in order to write it out: you can read it block by block in reasonably sized increments, and write it out as you go.
Starting with .NET 4.0 you can use the Stream.CopyTo method to accomplish this in a few lines of code:
// Read and ignore the initial four bytes of length from the stream
byte[] ignore = new byte[4];
int bytesRead = 0;
do {
    // This should complete in a single call, but the API requires you
    // to do it in a loop.
    bytesRead += netStream.Read(ignore, bytesRead, 4 - bytesRead);
} while (bytesRead != 4);

// Copy the rest of the stream to a file
using (var fs = new FileStream(@"D:\Javed\Miscellaneous\" + shortFileName, FileMode.Create)) {
    netStream.CopyTo(fs);
}
netStream.Close();
Starting with .NET 4.5 you can use CopyToAsync, too, which would give you a way to do reading and writing asynchronously.
Note the code that drops the initial four bytes from the stream. This is done to avoid writing the length of the stream along with the "payload" bytes. If you have control over the network protocol, you could change the sending side to stop prefixing the stream with its length, and remove the code that reads and ignores it on the receiving side.
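A rough sketch of the async variant, assuming .NET 4.5 and that the surrounding method is declared async (the path and netStream are the same placeholders as in the snippet above):
using (var fs = new FileStream(@"D:\Javed\Miscellaneous\" + shortFileName,
                               FileMode.Create, FileAccess.Write, FileShare.None,
                               bufferSize: 81920, useAsync: true))
{
    // Copies from the network to the file without blocking the calling thread.
    await netStream.CopyToAsync(fs);
}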
The issue is as follows: I am using an HttpWebRequest to request some online data from dmo.gov.uk. I am reading the response with a BinaryReader and writing it to a MemoryStream. I have packaged the code into a simple test method:
public static byte[] Test(int bufferSize)
{
    var request = (HttpWebRequest)WebRequest.Create("http://www.dmo.gov.uk/xmlData.aspx?rptCode=D3B.2");
    request.Method = "GET";
    request.Credentials = CredentialCache.DefaultCredentials;

    var buffer = new byte[bufferSize];

    using (var httpResponse = (HttpWebResponse)request.GetResponse())
    {
        using (var ms = new MemoryStream())
        {
            using (var reader = new BinaryReader(httpResponse.GetResponseStream()))
            {
                int bytesRead;
                while ((bytesRead = reader.Read(buffer, 0, bufferSize)) > 0)
                {
                    ms.Write(buffer, 0, bytesRead);
                }
            }
            return ms.GetBuffer();
        }
    }
}
My real-life code usually uses a buffer size of 2048 bytes; however, I noticed today that the resulting file has a huge number of empty bytes (\0) at the end, which bloats the file size. As a test I tried increasing the buffer size to near the file size I expected (I was expecting ~80 KB, so I made the buffer size 79000), and now I get the right file size. But I'm confused: I expected to get the same file size regardless of the buffer size used to read the data.
The following test:
Console.WriteLine(Test(2048).Length);
Console.WriteLine(Test(79000).Length);
Console.ReadLine();
Yields the following output:
131072
81341
The second figure, using the large buffer size, is the exact file size I was expecting (this file changes daily, so expect the size to differ after today's date). The first result contains \0 for everything after the expected file size.
What's going on here?
You should change ms.GetBuffer(); to ms.ToArray();.
GetBuffer returns the entire underlying buffer (including unused, zero-filled capacity), while ToArray returns only the bytes actually written to the MemoryStream.
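For instance, a tiny illustration of the difference (the buffer length shown is just what you would typically see, not guaranteed):
var ms = new MemoryStream();
ms.Write(new byte[] { 1, 2, 3 }, 0, 3);
byte[] raw  = ms.GetBuffer(); // whole internal buffer, e.g. 256 bytes, zero-padded
byte[] data = ms.ToArray();   // exactly the 3 bytes written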