Random "Memory stream is not expandable" error - C#

I was updating Word document content using the following code, which randomly threw the error "Memory stream is not expandable":
MemoryStream TemplateFileMS = new MemoryStream(fileBytes);
using (WordprocessingDocument wordDoc = WordprocessingDocument.Open(TemplateFileMS, true))
{
    //...
    // some code
    //...
    wordDoc.MainDocumentPart.Document.Save(); // Exception here
}
After changing the code to the following, the error no longer occurs.
MemoryStream TemplateFileMS = new MemoryStream(0);
TemplateFileMS.Write(fileBytes, 0, fileBytes.Length);
So I was able to fix the issue. However, I never saw this error in my dev environment on Azure App Service; only in the production Azure App Service did I get the "Memory stream is not expandable" error, and only randomly.
Is it the number of bytes/updates that makes the difference here? For example, while testing I make only a few updates, but in some scenarios there are more updates, which require more memory than the set capacity.
I tried adding more updates to the document, but I was not able to reproduce the error with the previous code.
Thank you!

Normally you'd only initialize a MemoryStream from a byte[] in order to read values from an existing buffer. But you are writing to the stream. That means you either need to let the MemoryStream manage the buffer itself (by not giving it one), or the buffer you give it needs to be big enough. In most cases the first option is simpler. The stream is complaining because you have given it a buffer that turned out to be too small, and it can't resize it because you supplied the buffer externally (rather than letting the MemoryStream have control of it).
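A minimal sketch of that first option, assuming fileBytes is the byte[] read from the template and the rest of the Open XML code is unchanged: create an empty MemoryStream (which owns and can grow its buffer) and copy the bytes into it:
using (MemoryStream templateStream = new MemoryStream())
{
    // Sketch only: the stream manages its own buffer, so it can expand as the document grows.
    templateStream.Write(fileBytes, 0, fileBytes.Length);
    templateStream.Position = 0;
    using (WordprocessingDocument wordDoc = WordprocessingDocument.Open(templateStream, true))
    {
        // ... edits to wordDoc ...
        wordDoc.MainDocumentPart.Document.Save(); // no longer limited by the original buffer size
    }
}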

Related

Decompressing a MemoryStream using Zlib

I am writing software that deals with a large collection of files that contain zlib-compressed data in different sections of the file rather than the entire file itself. I know how to grab the section(s) I need. However, I was having trouble getting the documented zlib stream classes to work properly. I googled and tried several different solutions but could not get them to work, except for one that uses a static method. The following code works just fine but does not work directly with the Zlib stream class, as I would prefer:
// reader is the BinaryReader for the original file and...
// The "Data" section consists of a UInt32 and the compressed data
byte[] compressedStream = reader.ReadBytes((int)Size - 4); // name was kept for code compatibility
MemoryStream deflatedStream = new MemoryStream(ZlibStream.UncompressBuffer(compressedStream), true);
I don't really have any issues with using the code above, since it gives me the decompressed data I need. However, I am baffled as to why my original code, which instantiates the zlib stream class directly, did not work (since they use the same basic API):
MemoryStream compressedStream = new MemoryStream(reader.ReadBytes((int)Size - 4));
ZlibStream deflatedStream = new ZlibStream(compressedStream, CompressionMode.Decompress, true);
Accessing almost any property of "deflatedStream" results in an error, which I assume means it did not work. It might be worth noting that I have not used the DotNetZip library itself but Zlib.Portable instead (the second most popular library). However, the API seems to be the same.
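For reference, the decompressed data from a ZlibStream only materializes when the stream is read, so the usual pattern is to copy it out (for example into a MemoryStream) rather than to inspect the wrapper's properties. A minimal sketch, assuming the same reader, Size, and Zlib.Portable ZlibStream as above:
byte[] compressedBytes = reader.ReadBytes((int)Size - 4);
using (MemoryStream compressedStream = new MemoryStream(compressedBytes))
using (ZlibStream zlib = new ZlibStream(compressedStream, CompressionMode.Decompress))
using (MemoryStream deflatedStream = new MemoryStream())
{
    zlib.CopyTo(deflatedStream);          // decompression happens as the ZlibStream is read
    byte[] decompressed = deflatedStream.ToArray();
}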

MemoryStream - OutOfMemoryException when trying to allocate space

I'm attempting to take a large file uploaded from a web app and turn it into a MemoryStream for processing later. I was receiving OutOfMemory exceptions when trying to copy the HttpPostedFileBase's InputStream into a new MemoryStream. During troubleshooting, I tried just creating a new MemoryStream and allocating roughly the same amount of space as the length of the InputStream (935,638,275), like so:
MemoryStream memStream = new MemoryStream(935700000);
Even doing this results in a System.OutOfMemoryException on this line.
I only slightly understand MemoryStreams, and this seems to have something to do with how MemoryStreams buffer data. Is there a way for me to get all of the data into one MemoryStream without too much fuss?
I am not sure what the processing involves, but the HttpPostedFileBase already contains a stream with the data. You can use that stream directly for whatever processing you need to do.
If you really need to move back and forth or multiple times over the stream, and the input stream does not support seeking/positioning, you may want to stream the data to a temporary local file first and then use a file stream to do your processing against that file.
If many people are uploading via your web app, arrays of the size you specified would quickly eat up all available memory if you use a MemoryStream per request.
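A minimal sketch of the temp-file approach, assuming an HttpPostedFileBase named upload (the variable name and the processing step are placeholders):
// Sketch only: stream the upload to a temporary file, then process it
// with a seekable FileStream instead of holding ~900 MB in memory.
string tempPath = Path.GetTempFileName();
using (FileStream tempFile = File.Create(tempPath))
{
    upload.InputStream.CopyTo(tempFile);    // no large in-memory buffer required
}
using (FileStream processingStream = File.OpenRead(tempPath))
{
    // ... seek and read the data as many times as needed ...
}
File.Delete(tempPath);                       // clean up when done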

Loading saved byte array to memory stream causes out of memory exception

At some point in my program the user selects a bitmap to use as the background image of a Panel object. When the user does this, the program immediately draws the panel with the background image and everything works fine. When the user clicks "Save", the following code saves the bitmap to a DataTable object.
MyDataSet.MyDataTableRow myDataRow = MyDataSet.MyDataTableRow.NewMyDataTableRow(); // has a byte[] column named BackgroundImageByteArray
using (MemoryStream stream = new MemoryStream())
{
    this.Panel.BackgroundImage.Save(stream, ImageFormat.Bmp);
    myDataRow.BackgroundImageByteArray = stream.ToArray();
}
Everything works fine; there is no out-of-memory exception with this stream, even though it contains all the image bytes. However, when the application launches and loads the saved data, the following code throws an OutOfMemoryException:
using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray))
{
    this.Panel.BackgroundImage = Image.FromStream(stream);
}
The streams are the same length. I don't understand how one throws an out of memory exception and the other doesn't. How can I load this bitmap?
P.S. I've also tried
using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray.Length))
{
    stream.Write(myDataRow.BackgroundImageByteArray, 0, myDataRow.BackgroundImageByteArray.Length); // throws OoM exception here
}
The issue I think is here:
myDataRow.BackgroundImageByteArray = stream.ToArray();
Stream.ToArray(). Be advised, this will convert the stream to an array of bytes with length = stream.Length. Stream.Length is the size of the stream's buffer, which is going to be larger than the actual data that has been loaded into it. You can solve this by using Stream.ReadByte() in a while loop until it returns -1, indicating the end of the data within the stream.
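A minimal sketch of that loop, assuming stream is the MemoryStream the image was saved into and that the goal is to collect only the bytes actually read back:
// Sketch of the suggested approach: read until ReadByte() returns -1.
stream.Position = 0;
List<byte> imageBytes = new List<byte>();
int next;
while ((next = stream.ReadByte()) != -1)
{
    imageBytes.Add((byte)next);
}
myDataRow.BackgroundImageByteArray = imageBytes.ToArray();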
You might give this library a look.
http://arraysegments.codeplex.com/
Project Description
Lightweight extension methods for ArraySegment, particularly useful for byte arrays.
Supports .NET 4.0 (client and full), .NET 4.5, Metro/WinRT, Silverlight 4 and 5, Windows Phone 7 and 7.5, all portable library profiles, and XBox.

Reading an image file from local storage on mono for android

In mono for android I have an app that saves images to local storage for caching purposes. When the app launches it tries to load images from the cache before trying to load them from the web.
I'm currently having a hard time finding a good way to read and load them from local storage.
I'm currently using something equivalent to this:
List<byte> byteList = new List<byte>();
using (System.IO.BinaryReader binaryReader = new System.IO.BinaryReader(context.OpenFileInput("filename.jpg")))
{
    while (binaryReader.BaseStream.IsDataAvailable())
    {
        byteList.Add(binaryReader.ReadByte());
    }
}
return byteList.ToArray();
OpenFileInput() returns a stream that does not give me a length, so I have to read one byte at a time. It also can't seek. This seems to be causing images to load much slower than they ought to. Loading images from Resource.Drawable is almost instantaneous by comparison, but with my method there is a very noticeable pause, maybe 300 ms, for loading an 8 KB file. This seems like a really obvious task to be able to do, but I've tried many solutions and searched a lot for advice, to no avail.
I've also noticed that this code seems to crash with an EndOfStreamException when not run on the UI thread.
Any help would be hugely appreciated.
What do you intend on doing with the List<byte>? You want to "load images from the cache," but you don't specify what you want to load them into.
If you want to load them into an Android.Graphics.Bitmap, you could use BitmapFactory.DecodeStream(Stream):
Bitmap bitmap = BitmapFactory.DecodeStream(context.OpenFileInput("filename.jpg"));
This would remove the List<byte> intermediary.
If you really need all the bytes (for whatever reason), you can rely on the fact that System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal) is the same as Context.FilesDir, which is what context.OpenFileInput() will use, permitting:
byte[] bytes = System.IO.File.ReadAllBytes(
    Path.Combine(
        System.Environment.GetFolderPath(System.Environment.SpecialFolder.Personal),
        "filename.jpg"));
However, if this is truly a cache, you should be using Context.CacheDir instead of Context.FilesDir, which is what System.IO.Path.GetTempPath() returns:
byte[] cachedBytes = System.IO.File.ReadAllBytes(
    Path.Combine(System.IO.Path.GetTempPath(), "filename.jpg"));

Can DeflateStream or GZipStream be used to deflate an uncompressed file?

I'm trying to add file compression to an application. The application has been around for a while, so it needs to be able to read uncompressed documents written by previous versions. I expected that DeflateStream would be able to process an uncompressed file, but for GZipStream I get the error "The magic number in GZip header is not correct", and for DeflateStream I get "Found invalid data while decoding". I guess it does not find the header that marks the file as the expected type.
If it's not possible to simply process an uncompressed file, then the second-best option would be a way to determine whether a file is compressed, and to choose the method of reading the file accordingly. I've found this link: http://blog.somecreativity.com/2008/04/08/how-to-check-if-a-file-is-compressed-in-c/, but this is very implementation-specific and doesn't feel like the right approach. It can also produce false positives (I'm sure this would be rare, but it does indicate that it's not the right approach).
A third option I've considered is to attempt using DeflateStream and fall back to normal stream IO if an exception occurs. This also feels messy, and it causes VS to break at the exception (unless I untick that exception, which I don't really want to have to do).
Of course, I may simply be going about it the wrong way. This is the code I've tried in .NET 3.5:
Stream reader = new FileStream(fileName, FileMode.Open, readOnly ? FileAccess.Read : FileAccess.ReadWrite, readOnly ? FileShare.ReadWrite : FileShare.Read);
using (DeflateStream decompressedStream = new DeflateStream(reader, CompressionMode.Decompress))
{
    workspace = (Workspace)new XmlSerializer(typeof(Workspace)).Deserialize(decompressedStream);
    if (readOnly)
    {
        reader.Close();
        workspace.FilePath = fileName;
    }
    else
        workspace.SetOpen(reader, fileName);
}
Any ideas?
Thanks!
Luke.
Doesn't your file format have a header? If not, now is the time to add one (you're changing the file format by supporting compression, anyway). Pick a good magic value, make sure the header is extensible (add a version field, or use specific magic values for specific versions), and you're ready to go.
Upon loading, check for the magic value. If not present, use your current legacy loading routines. If present, the header will tell you whether the contents are compressed or not.
Update
Compressing the stream means the file is no longer an XML document, so there's no reason the file can't contain more than just your data stream. You really do want a header identifying your file :)
The below is example (pseudo-)code; I don't know if .NET has a "substream", so SubRangeStream is likely something you'll have to code yourself (DeflateStream probably adds its own header, so a substream might not be necessary; it could turn out useful further down the road, though).
Int64 oldPosition = reader.Position;
reader.Read(magic, 0, magic.Length);
if (IsRightMagicValue(magic))
{
    Header header = ReadHeader(reader);
    Stream furtherReader = new SubRangeStream(reader, reader.Position, header.ContentLength);
    if (header.IsCompressed)
    {
        furtherReader = new DeflateStream(furtherReader, CompressionMode.Decompress);
    }
    XmlSerializer xml = new XmlSerializer(typeof(Workspace));
    workspace = (Workspace)xml.Deserialize(furtherReader);
}
else
{
    reader.Position = oldPosition;
    LegacyLoad(reader);
}
In real life, I would do things a bit differently - some proper error handling and cleanup, for instance. Also, I wouldn't put the new loader code directly in the IsRightMagicValue block; rather, I'd dispatch the work either based on the magic value (one magic value per file version), or I would keep a "common header" portion with fields common to all versions. In both cases, I'd use a Factory Method to return an IWorkspaceReader depending on the file version.
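A rough, hypothetical sketch of that factory idea (IWorkspaceReader and the reader classes below are invented names; Workspace is the existing type from the question, and the usual System.IO, System.IO.Compression and System.Xml.Serialization usings are assumed):
public interface IWorkspaceReader
{
    Workspace Read(Stream input);
}

// Reads old, uncompressed XML documents.
public class LegacyWorkspaceReader : IWorkspaceReader
{
    public Workspace Read(Stream input)
    {
        return (Workspace)new XmlSerializer(typeof(Workspace)).Deserialize(input);
    }
}

// Reads the new format, where the payload after the header is deflated.
public class CompressedWorkspaceReader : IWorkspaceReader
{
    public Workspace Read(Stream input)
    {
        using (DeflateStream deflate = new DeflateStream(input, CompressionMode.Decompress))
        {
            return (Workspace)new XmlSerializer(typeof(Workspace)).Deserialize(deflate);
        }
    }
}

public static class WorkspaceReaderFactory
{
    // 'isCompressed' stands in for whatever version/flag information the real header carries.
    public static IWorkspaceReader Create(bool isCompressed)
    {
        return isCompressed ? (IWorkspaceReader)new CompressedWorkspaceReader()
                            : new LegacyWorkspaceReader();
    }
}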
Can't you just create a wrapper class/function for reading the file and catch the exception? Something like
try
{
    // Try to return a decompressed stream
}
catch (InvalidDataException e)
{
    // Assume it is already decompressed and return it as it is
}
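A minimal sketch of such a wrapper, assuming the DeflateStream fallback described in the question; the method name OpenWorkspace is invented:
// Hypothetical wrapper: try DeflateStream first, fall back to plain XML
// if the data turns out not to be compressed.
public static Workspace OpenWorkspace(string fileName)
{
    XmlSerializer serializer = new XmlSerializer(typeof(Workspace));
    try
    {
        using (FileStream file = File.OpenRead(fileName))
        using (DeflateStream deflate = new DeflateStream(file, CompressionMode.Decompress))
        {
            return (Workspace)serializer.Deserialize(deflate);
        }
    }
    catch (InvalidDataException)
    {
        // Assume an uncompressed legacy document and read it as-is.
        using (FileStream file = File.OpenRead(fileName))
        {
            return (Workspace)serializer.Deserialize(file);
        }
    }
}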
