OutOfMemory Exception while converting MemoryStream to Array - C#

I use the following code to return a byte array in HttpResponseMessage:
using (WebResponse response = (HttpWebResponse)request.GetResponse())
{
    byte[] bytes = ReadFully(response.GetResponseStream());
    ......
}

public static byte[] ReadFully(Stream input)
{
    byte[] buffer = new byte[16 * 1024];
    using (MemoryStream ms = new MemoryStream())
    {
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        return ms.ToArray(); // This line throws OutOfMemory exception
    }
}
An OutOfMemory exception is thrown in the last return ms.ToArray() statement.
I need to set the resulting byte[] as HttpResponseMessage.Content.

You should return the stream directly instead of reading it into memory first.
public HttpResponseMessage CreateMessage(Stream input)
{
    HttpResponseMessage result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new StreamContent(input);
    return result;
}
Do not forget to set the appropriate headers etc.
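For instance, a sketch of setting those headers on StreamContent (the media type and file name below are placeholders, not values from the question; the header types live in System.Net.Http.Headers):
public HttpResponseMessage CreateFileMessage(Stream input)
{
    var result = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StreamContent(input)
    };
    // Placeholder values: substitute the real media type and file name of your content.
    result.Content.Headers.ContentType =
        new MediaTypeHeaderValue("application/octet-stream");
    result.Content.Headers.ContentDisposition =
        new ContentDispositionHeaderValue("attachment") { FileName = "download.bin" };
    return result;
}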
Edit
... I need to write the byte array from the HttpResponseMessage into a file
Based on your last comment you changed your question and want to go the other way. Here is an example of writing to a file from a web response.
public void writetoFile(HttpWebResponse response)
{
    var inStream = response.GetResponseStream();
    using (var file = System.IO.File.OpenWrite("your file path here"))
    {
        inStream.CopyTo(file);
    }
}

Igor posted the solution and the correct way to deal with stream content. Use one of the MVC helper functions like File(stream, contentType) or classes like StreamContent to send the stream contents directly to the client, e.g.:
return File(myStream,myExcelContentTypeString);
or
return File(myStream,myExcelContentTypeString,"ReallyBigFile.xlsx");
The reason for the error is that an OOM can occur when memory is too fragmented to allocate a new object. A MemoryStream stores data in a buffer. When the data exceeds the buffer's capacity, the stream allocates a new buffer with double the capacity and copies the old data over. Copying 250MB of data like this causes a lot of reallocations and thus a lot of memory fragmentation.
This can be avoided by specifying the desired capacity in the stream's constructor, which allocates a large enough buffer up front.
It's better still to avoid caching this content at all and send it to the browser directly.
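If the response really has to be buffered, a minimal sketch of pre-sizing the MemoryStream from the response's ContentLength (when the server reports one) could look like this; note that ToArray still makes one final copy, so streaming directly remains the better option:
public static byte[] ReadFullyPreallocated(WebResponse response)
{
    // Pre-size the MemoryStream so its internal buffer is not repeatedly
    // reallocated and copied while the data arrives.
    long length = response.ContentLength;
    int capacity = (length > 0 && length <= int.MaxValue) ? (int)length : 16 * 1024;
    using (Stream input = response.GetResponseStream())
    using (MemoryStream ms = new MemoryStream(capacity))
    {
        byte[] buffer = new byte[16 * 1024];
        int read;
        while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, read);
        }
        return ms.ToArray();
    }
}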

Related

How to copy a large stream to another without OutOfMemoryException in C# [duplicate]

What is the best way to copy the contents of one stream to another? Is there a standard utility method for this?
From .NET 4.5 on, there is the Stream.CopyToAsync method
input.CopyToAsync(output);
This will return a Task that can be continued on when completed, like so:
await input.CopyToAsync(output);
// Code from here on will be run in a continuation.
Note that depending on where the call to CopyToAsync is made, the code that follows may or may not continue on the same thread that called it.
The SynchronizationContext that was captured when calling await will determine what thread the continuation will be executed on.
Additionally, this call (and this is an implementation detail subject to change) still sequences reads and writes (it just doesn't waste a thread blocking on I/O completion).
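Putting it together, a minimal sketch of an async file copy (the paths here are placeholders, not part of the original question):
public static async Task CopyFileAsync(string sourcePath, string destinationPath)
{
    using (FileStream input = File.OpenRead(sourcePath))
    using (FileStream output = File.Create(destinationPath))
    {
        // ConfigureAwait(false) opts out of capturing the SynchronizationContext,
        // so the continuation may run on a thread-pool thread instead.
        await input.CopyToAsync(output).ConfigureAwait(false);
    }
}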
From .NET 4.0 on, there is the Stream.CopyTo method
input.CopyTo(output);
For .NET 3.5 and before
There isn't anything baked into the framework to assist with this; you have to copy the content manually, like so:
public static void CopyStream(Stream input, Stream output)
{
    byte[] buffer = new byte[32768];
    int read;
    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, read);
    }
}
Note 1: This method lets you report progress (x bytes read so far ...); a sketch of that follows after these notes.
Note 2: Why use a fixed buffer size and not input.Length? Because that Length may not be available! From the docs:
If a class derived from Stream does not support seeking, calls to Length, SetLength, Position, and Seek throw a NotSupportedException.
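For example, a sketch of the same loop with an illustrative progress callback (the callback parameter is not part of the original answer):
public static void CopyStreamWithProgress(Stream input, Stream output, Action<long> reportProgress)
{
    byte[] buffer = new byte[32768];
    long totalRead = 0;
    int read;
    while ((read = input.Read(buffer, 0, buffer.Length)) > 0)
    {
        output.Write(buffer, 0, read);
        totalRead += read;
        if (reportProgress != null)
        {
            reportProgress(totalRead); // e.g. "x bytes read so far"
        }
    }
}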
MemoryStream has .WriteTo(outstream);
and .NET 4.0 has .CopyTo on the base Stream class.
.NET 4.0:
instream.CopyTo(outstream);
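As a short sketch of WriteTo (SaveToFile is just a hypothetical helper name, and the path is supplied by the caller):
// WriteTo copies the entire contents of the MemoryStream to another stream,
// regardless of the MemoryStream's current Position.
public static void SaveToFile(MemoryStream source, string path)
{
    using (FileStream file = File.Create(path))
    {
        source.WriteTo(file);
    }
}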
I use the following extension methods. They have optimized overloads for when one stream is a MemoryStream.
public static void CopyTo(this Stream src, Stream dest)
{
    int size = (src.CanSeek) ? Math.Min((int)(src.Length - src.Position), 0x2000) : 0x2000;
    byte[] buffer = new byte[size];
    int n;
    do
    {
        n = src.Read(buffer, 0, buffer.Length);
        dest.Write(buffer, 0, n);
    } while (n != 0);
}

public static void CopyTo(this MemoryStream src, Stream dest)
{
    dest.Write(src.GetBuffer(), (int)src.Position, (int)(src.Length - src.Position));
}

public static void CopyTo(this Stream src, MemoryStream dest)
{
    if (src.CanSeek)
    {
        int pos = (int)dest.Position;
        int length = (int)(src.Length - src.Position) + pos;
        dest.SetLength(length);
        while (pos < length)
            pos += src.Read(dest.GetBuffer(), pos, length - pos);
    }
    else
        src.CopyTo((Stream)dest);
}
.NET Framework 4 introduced the new CopyTo method on the Stream class in the System.IO namespace. Using this method we can copy one stream to another, even across different stream classes.
Here is an example:
FileStream objFileStream = File.Open(Server.MapPath("TextFile.txt"), FileMode.Open);
Response.Write(string.Format("FileStream Content length: {0}", objFileStream.Length.ToString()));
MemoryStream objMemoryStream = new MemoryStream();
// Copy File Stream to Memory Stream using CopyTo method
objFileStream.CopyTo(objMemoryStream);
Response.Write("<br/><br/>");
Response.Write(string.Format("MemoryStream Content length: {0}", objMemoryStream.Length.ToString()));
Response.Write("<br/><br/>");
There is actually a less heavy-handed way of doing a stream copy. Note, however, that this implies you can store the entire file in memory. Don't try to use this without caution if you are working with files that run into hundreds of megabytes or more.
public static void CopySmallTextStream(Stream input, Stream output)
{
    using (StreamReader reader = new StreamReader(input))
    using (StreamWriter writer = new StreamWriter(output))
    {
        writer.Write(reader.ReadToEnd());
    }
}
NOTE: There may also be some issues concerning binary data and character encodings.
The basic questions that differentiate implementations of "CopyStream" are:
the size of the read buffer
the size of the writes
whether we can use more than one thread (writing while we are reading)
The answers to these questions result in vastly different implementations of CopyStream and depend on what kind of streams you have and what you are trying to optimize. The "best" implementation would even need to know what specific hardware the streams are reading from and writing to.
Unfortunately, there is no really simple solution. You can try something like this:
Stream s1, s2;
byte[] buffer = new byte[4096];
int bytesRead = 0;
while ((bytesRead = s1.Read(buffer, 0, buffer.Length)) > 0) s2.Write(buffer, 0, bytesRead);
s1.Close(); s2.Close();
But the problem with that is that different implementations of the Stream class might behave differently if there is nothing to read. A stream reading a file from a local hard drive will probably block until the read operation has read enough data from the disk to fill the buffer, and only return less data if it reaches the end of the file. On the other hand, a stream reading from the network might return less data even though there is more data left to be received.
Always check the documentation of the specific stream class you are using before using a generic solution.
There may be a way to do this more efficiently, depending on what kind of stream you're working with. If you can convert one or both of your streams to a MemoryStream, you can use the GetBuffer method to work directly with a byte array representing your data. This lets you use methods like Array.CopyTo, which abstract away all the issues raised by fryguybob. You can just trust .NET to know the optimal way to copy the data.
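As a sketch (assuming the source is a MemoryStream whose buffer is exposable, i.e. not one constructed over a caller-supplied array):
// When the source is a MemoryStream, its internal buffer can be handed
// straight to the destination. GetBuffer returns the whole allocated buffer,
// so only the first Length bytes are valid data.
public static void CopyFromMemoryStream(MemoryStream source, Stream destination)
{
    destination.Write(source.GetBuffer(), 0, (int)source.Length);
}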
If you want a procedure to copy one stream to another, the one that Nick posted is fine, but it is missing the position reset; it should be
public static void CopyStream(Stream input, Stream output)
{
    byte[] buffer = new byte[32768];
    long TempPos = input.Position;
    while (true)
    {
        int read = input.Read(buffer, 0, buffer.Length);
        if (read <= 0)
            break;
        output.Write(buffer, 0, read);
    }
    input.Position = TempPos; // or set Position = 0 to rewind to the start
}
but if it is written inline at runtime rather than in a separate procedure, you should use a MemoryStream:
Stream output = new MemoryStream();
byte[] buffer = new byte[32768]; // or whatever buffer size you want
long TempPos = input.Position;
while (true)
{
    int read = input.Read(buffer, 0, buffer.Length);
    if (read <= 0)
        break;
    output.Write(buffer, 0, read);
}
input.Position = TempPos; // or set Position = 0 to rewind to the start
Since none of the answers have covered an asynchronous way of copying from one stream to another, here is a pattern that I've successfully used in a port forwarding application to copy data from one network stream to another. It lacks exception handling to emphasize the pattern.
const int BUFFER_SIZE = 4096;
static byte[] bufferForRead = new byte[BUFFER_SIZE];
static byte[] bufferForWrite = new byte[BUFFER_SIZE];
static Stream sourceStream = new MemoryStream();
static Stream destinationStream = new MemoryStream();

static void Main(string[] args)
{
    // Initial read from source stream
    sourceStream.BeginRead(bufferForRead, 0, BUFFER_SIZE, BeginReadCallback, null);
}

private static void BeginReadCallback(IAsyncResult asyncRes)
{
    // Finish reading from source stream
    int bytesRead = sourceStream.EndRead(asyncRes);
    // Make a copy of the buffer as we'll start another read immediately
    Array.Copy(bufferForRead, 0, bufferForWrite, 0, bytesRead);
    // Write copied buffer to destination stream
    destinationStream.BeginWrite(bufferForWrite, 0, bytesRead, BeginWriteCallback, null);
    // Start the next read (looks like async recursion I guess)
    sourceStream.BeginRead(bufferForRead, 0, BUFFER_SIZE, BeginReadCallback, null);
}

private static void BeginWriteCallback(IAsyncResult asyncRes)
{
    // Finish writing to destination stream
    destinationStream.EndWrite(asyncRes);
}
For .NET 3.5 and before, try:
MemoryStream1.WriteTo(MemoryStream2);
Easy and safe - make new stream from original source:
MemoryStream source = new MemoryStream(byteArray);
MemoryStream copy = new MemoryStream(byteArray);
The following code solves the issue by writing into a MemoryStream and then converting it to a byte array:
Stream stream = new MemoryStream();
// Any function that writes to the stream; in my case, saving a PDF document as a stream
document.Save(stream);
MemoryStream newMs = (MemoryStream)stream;
byte[] getByte = newMs.ToArray();
// Note: dispose of the stream in a finally block rather than inside a using block,
// otherwise later access throws an 'Access denied as the stream is closed' error.

C# increase the size of reading binary data

I am using the code below from Jon Skeet's article. Of late, the binary data that needs to be processed has grown multi-fold. The binary file I am trying to import is ~900 MB, almost 1 GB. How do I increase the memory stream size?
public static byte[] ReadFully(Stream stream)
{
    byte[] buffer = new byte[32768];
    using (MemoryStream ms = new MemoryStream())
    {
        while (true)
        {
            int read = stream.Read(buffer, 0, buffer.Length);
            if (read <= 0)
                return ms.ToArray();
            ms.Write(buffer, 0, read);
        }
    }
}
Your method returns a byte array, which means it will return all of the data in the file. Your entire file will be loaded into memory.
If that is what you want to do, then simply use the built in File methods:
byte[] bytes = System.IO.File.ReadAllBytes(path);
string text = System.IO.File.ReadAllText(path);
If you don't want to load the entire file into memory, take advantage of your Stream
using (var fs = new FileStream("path", FileMode.Open))
using (var reader = new StreamReader(fs))
{
    var line = reader.ReadLine();
    // do stuff with 'line' here, or use one of the other
    // StreamReader methods.
}
You don't have to increase the size of MemoryStream - by default it expands to fit the contents.
Apparently there can be problems with memory fragmentation, but you can pre-allocate memory to avoid them:
using (MemoryStream ms = new MemoryStream(1024 * 1024 * 1024)) // initial capacity 1GB
{
}
In my opinion 1GB should be no big deal these days, but it's probably better to process the data in chunks if possible. That is what Streams are designed for.
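A minimal sketch of chunked processing (processChunk here is a hypothetical placeholder for whatever work you need to do on the data):
public static void ProcessInChunks(string path, Action<byte[], int> processChunk)
{
    byte[] buffer = new byte[64 * 1024];
    using (FileStream fs = File.OpenRead(path))
    {
        int read;
        while ((read = fs.Read(buffer, 0, buffer.Length)) > 0)
        {
            processChunk(buffer, read); // only the first 'read' bytes are valid
        }
    }
}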

BinaryReader reading different length of data depending on BufferSize

The issue is as follows, I am using an HttpWebRequest to request some online data from dmo.gov.uk. The response I am reading using a BinaryReader and writing to a MemoryStream. I have packaged the code being used into a simple test method:
public static byte[] Test(int bufferSize)
{
    var request = (HttpWebRequest)WebRequest.Create("http://www.dmo.gov.uk/xmlData.aspx?rptCode=D3B.2");
    request.Method = "GET";
    request.Credentials = CredentialCache.DefaultCredentials;
    var buffer = new byte[bufferSize];
    using (var httpResponse = (HttpWebResponse)request.GetResponse())
    {
        using (var ms = new MemoryStream())
        {
            using (var reader = new BinaryReader(httpResponse.GetResponseStream()))
            {
                int bytesRead;
                while ((bytesRead = reader.Read(buffer, 0, bufferSize)) > 0)
                {
                    ms.Write(buffer, 0, bytesRead);
                }
            }
            return ms.GetBuffer();
        }
    }
}
My real-life code usually uses a buffer size of 2048 bytes; however, I noticed today that this file has a huge number of empty bytes (\0) at the end, which bloats the file size. As a test I tried increasing the buffer size to near the file size I expected (I was expecting ~80 KB, so I made the buffer size 79000) and now I get the right file size. But I'm confused: I expected to get the same file size regardless of the buffer size used to read the data.
The following test:
Console.WriteLine(Test(2048).Length);
Console.WriteLine(Test(79000).Length);
Console.ReadLine();
Yields the following output:
131072
81341
The second figure, using the high buffer size, is the exact file size I was expecting (this file changes daily, so expect that size to differ after today's date). The first figure contains \0 for everything after the expected file size.
What's going on here?
You should change ms.GetBuffer(); to ms.ToArray();.
GetBuffer returns the MemoryStream's entire underlying buffer, including unused capacity, while ToArray returns only the bytes that have actually been written to the MemoryStream.
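A small illustration of the difference (the 256 here is the capacity produced by the default growth policy, an implementation detail):
var ms = new MemoryStream();
ms.Write(new byte[100], 0, 100);
Console.WriteLine(ms.ToArray().Length);   // 100 - only the bytes written
Console.WriteLine(ms.GetBuffer().Length); // 256 - the whole allocated buffer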

HttpWebResponse + Stream.Read, adding null chars at the end

I'm trying to get a byte[] array filled with request response, without any extra garbage data.
This is how I fetch the data:
using (Stream MyResponseStream = hwresponse.GetResponseStream())
{
    byte[] MyBuffer = new byte[4096];
    int BytesRead;
    while (0 < (BytesRead = MyResponseStream.Read(MyBuffer, 0, MyBuffer.Length)))
    {
        ByteArrayToFile("request.txt", MyBuffer);
    }
}
I use the function 'ByteArrayToFile' to see what data has been received.
public void ByteArrayToFile(string _FileName, byte[] _ByteArray)
{
    System.IO.FileStream _FileStream = new System.IO.FileStream(_FileName, System.IO.FileMode.Append, System.IO.FileAccess.Write);
    _FileStream.Write(_ByteArray, 0, _ByteArray.Length);
    _FileStream.Close();
}
I get request written to the file, but a lot of 'null' characters are added at the end. How do I trim them? Since I'm going to need this to handle binary files, how can I safely trim out the endings and have just pure array of response? Thanks!
You need to utilise the value of BytesRead; it indicates exactly how many bytes were received:
public void ByteArrayToFile(string _FileName, byte[] _ByteArray, int _BytesRead)
{
    using (var _FileStream = new FileStream(
        _FileName, FileMode.Append, FileAccess.Write))
    {
        _FileStream.Write(_ByteArray, 0, _BytesRead);
    }
}
Otherwise you're writing out an array of length X which has only been populated with Y elements, causing a number of 'unused' elements in the array to also be written out. There is also the possibility of stale data remaining in the buffer from a previous pass, meaning misinformation could end up being written out on the next write.
You should also dispose of FileStream instances when done (although Close does this for a Stream, I'd recommend the consistency of calling Dispose in one of two ways: explicitly or as illustrated in the code above, implicitly using the using construct).
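Putting it together, the calling loop from the question would then pass BytesRead through (a sketch based on the question's code):
using (Stream MyResponseStream = hwresponse.GetResponseStream())
{
    byte[] MyBuffer = new byte[4096];
    int BytesRead;
    while (0 < (BytesRead = MyResponseStream.Read(MyBuffer, 0, MyBuffer.Length)))
    {
        // Pass the actual number of bytes read, not the full buffer length.
        ByteArrayToFile("request.txt", MyBuffer, BytesRead);
    }
}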

Preserving binary data in streams

Using C#, I was surprised how complicated it seemed to preserve binary info from a stream. I'm trying to download a PNG file using the WebRequest class, but just transferring the resulting Stream to a file without corrupting it was more verbose than I thought. First, just using StreamReader and StreamWriter was no good, as the ReadToEnd() function returns a string, which effectively doubles the size of the PNG file (probably due to the UTF conversion).
So my question is, do I really have to write all this code, or is there a cleaner way of doing it?
Stream srBytes = webResponse.GetResponseStream();
// Write to file
Stream swBytes = new FileStream("map(" + i.ToString() + ").png", FileMode.Create, FileAccess.Write);
int count = 0;
byte[] buffer = new byte[4096];
do
{
    count = srBytes.Read(buffer, 0, buffer.Length);
    swBytes.Write(buffer, 0, count);
}
while (count != 0);
swBytes.Close();
Using StreamReader/StreamWriter is definitely a mistake, yes - because that's trying to load the file as text, which it's not.
Options:
Use WebClient.DownloadFile as SLaks suggested
In .NET 4, use Stream.CopyTo(Stream) to copy the data in much the same way as you've got here
Otherwise, write your own utility method to do the copying, then you only need to do it once; you could even write this as an extension method, which means when you upgrade to .NET 4 you can just get rid of the utility method and use the built-in one with no change to the calling code:
public static class StreamExtensions
{
    public static void CopyTo(this Stream source, Stream destination)
    {
        if (source == null)
        {
            throw new ArgumentNullException("source");
        }
        if (destination == null)
        {
            throw new ArgumentNullException("destination");
        }
        byte[] buffer = new byte[8192];
        int bytesRead;
        while ((bytesRead = source.Read(buffer, 0, buffer.Length)) > 0)
        {
            destination.Write(buffer, 0, bytesRead);
        }
    }
}
Note that you should be using using statements for the web response, response stream and output stream in order to make sure they're always closed appropriately, like this:
using (WebResponse response = request.GetResponse())
using (Stream responseStream = response.GetResponseStream())
using (Stream outputStream = File.OpenWrite("map(" + i + ").png"))
{
    responseStream.CopyTo(outputStream);
}
You can call WebClient.DownloadFile(url, localPath).
In .NET 4.0, you can simplify your current code by calling Stream.CopyTo.
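A sketch of the WebClient option (url and localPath are placeholders for your own values):
using (var client = new WebClient())
{
    client.DownloadFile(url, localPath); // downloads straight to disk, no manual copy loop
}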
