"A generic error occurred in GDI+" - C#

I am getting the error "A generic error occurred in GDI+" in my sample code below. I make a request for each of many JPEG files available on a live site.
When I get the response, I save the file to my application's local folder
and convert the image to binary (a byte array) so that I can save it into a database.
private byte[] GetBinaryImageData(string imgURL)
{
    HttpWebRequest Request = (HttpWebRequest)WebRequest.Create(imgURL);
    WebResponse response = Request.GetResponse();
    Stream str = response.GetResponseStream();
    System.Drawing.Image objTempImg = System.Drawing.Image.FromStream(str);
    objTempImg.Save(FileName, ImageFormat.Jpeg);
    FileStream fileStream = new FileStream(FileName, FileMode.Open, FileAccess.Read);
    byte[] buffer = new byte[fileStream.Length];
    fileStream.Read(buffer, 0, (int)fileStream.Length);
    fileStream.Close();
    return buffer;
}
I don't get this error for all images; it occurs only for some of them. Does anybody know the solution? I have already spent two days trying to overcome this.

If I had to guess, it is occasionally having problems with handles, because you aren't disposing things correctly. This is especially important for things like GDI+. Introduce a few using statements into your code, since pretty much all of those objects are IDisposable:
HttpWebRequest Request = (HttpWebRequest)WebRequest.Create(imgURL);
using (WebResponse response = Request.GetResponse())
using (Stream str = response.GetResponseStream())
using (System.Drawing.Image objTempImg = System.Drawing.Image.FromStream(str))
{
    objTempImg.Save(FileName, ImageFormat.Jpeg);
}
return File.ReadAllBytes(FileName);
(Note I changed your file-reading code too, since it was hugely unreliable: it is incorrect to assume that Stream.Read actually reads all the data; you are supposed to check the return value and loop. But since you want it all, File.ReadAllBytes is easier.)
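For reference, here is a minimal sketch of the read-and-loop pattern the note above alludes to, for cases where File.ReadAllBytes is not an option (for example, reading straight from a response stream). The helper name ReadFully is illustrative:

private static byte[] ReadFully(Stream stream)
{
    using (var ms = new MemoryStream())
    {
        byte[] buffer = new byte[8192];
        int bytesRead;
        // Stream.Read may return fewer bytes than requested;
        // loop until it returns 0, which signals the end of the stream.
        while ((bytesRead = stream.Read(buffer, 0, buffer.Length)) > 0)
        {
            ms.Write(buffer, 0, bytesRead);
        }
        return ms.ToArray();
    }
}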

Judging by this article, it appears the stream the image is loaded from must exist for the lifetime of the image, or bad things can happen.
I've run into this from time to time and found it easiest to copy the image data to a memory stream, or to a file my process controls, and construct the image from that (a sketch follows the quote below).
From the linked article:
GDI+, and therefore the System.Drawing namespace, may defer the decoding of raw image bits until the bits are required by the image. Additionally, even after the image has been decoded, GDI+ may determine that it is more efficient to discard the memory for a large Bitmap and to re-decode later. Therefore, GDI+ must have access to the source bits for the image for the life of the Bitmap or the Image object.
To retain access to the source bits, GDI+ locks any source file, and forces the application to maintain the life of any source stream, for the life of the Bitmap or the Image object.
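A minimal sketch of that copy-to-a-memory-stream approach, assuming the same response object as in the question. The MemoryStream is deliberately not disposed, because it must outlive the Image it backs:

using (Stream str = response.GetResponseStream())
{
    var ms = new MemoryStream();
    str.CopyTo(ms);   // copy the source bits into memory this process owns
    ms.Position = 0;
    // GDI+ can now re-read the source bits from ms whenever it needs to,
    // so keep ms alive for as long as the Image is in use.
    Image objTempImg = Image.FromStream(ms);
}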

Related

Image.GetThumbnailImage throws OutOfMemoryException

The following C# 3.5 code throws an exception in GetThumbnailImage:
Image img = null;
Image scaledImg = null;
byte[] imageData = File.ReadAllBytes("11001.jpg");
MemoryStream stream = new MemoryStream(imageData);
img = Image.FromStream(stream);
stream.Close();
stream.Dispose();
scaledImg = img.GetThumbnailImage(64, 64, null, IntPtr.Zero);
The problem is the disposing of the stream. If I remove the Close() and Dispose() statements, everything works fine. Does anyone know why this exception is thrown? Using a callback instead of null as a parameter does not change the behavior.
I don't need a solution; I can use new Bitmap(img, new Size(width, height)) for scaling. This is probably better and should have been used from the beginning.
Edit: Sorry, I forgot to mention that the exception only occurs on Windows XP. Win7 and Win8 seem to handle the above code just fine.
GetThumbnailImage is a native method that uses the same stream you passed while creating the Image. Only the .NET methods use the actual data from the stream (it is loaded as the last step while creating the image from a stream).
It's a pretty typical leaky abstraction. You thought you had an Image object that already has its data loaded, but the data is only loaded on the .NET side. Any method that works directly with the GDI+ image handle will still need the stream to be live.
Another example of the same problem occurs when trying to save the image in a different format. When you save in the same format, .NET just writes the byte data it has in memory; simple. If the format differs, it uses the GDI+ method SaveImageToFile, which again requires the original stream to be preserved.
If you don't mind having the stream alive, you can just move the Dispose after everything you do with the images produced from that stream:
using (var stream = new MemoryStream(imageData))
{
    var img = Image.FromStream(stream);
    var scaledImg = img.GetThumbnailImage(64, 64, null, IntPtr.Zero);
    scaledImg.Save(...);
}
If you do mind, the easiest thing you can do is this:
var bmp = new Bitmap(Image.FromStream(stream));
The Bitmap constructor will copy the image data, and you no longer need to keep the stream.
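For completeness, a sketch of a variant that also disposes the intermediate Image (imageData as in the question):

using (var stream = new MemoryStream(imageData))
using (var img = Image.FromStream(stream))
{
    var bmp = new Bitmap(img); // full pixel copy; safe to use after stream and img are gone
}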
I think that GetThumbnailImage uses the stream to do its job, so it needs the stream to be open. You can do it like this:
Image img = null;
Image scaledImg = null;
byte[] imageData = File.ReadAllBytes("11001.jpg");
using (MemoryStream stream = new MemoryStream(imageData))
{
    img = Image.FromStream(stream);
    scaledImg = img.GetThumbnailImage(64, 64, null, IntPtr.Zero);
}
(using closes and disposes the MemoryStream; it also handles exceptional code paths, so the stream is released even if an exception is thrown)
The Image object uses lazy evaluation. As you have already closed the stream, when it actually tries to read the image in order to get the thumbnail, it's not there anymore. Hence the error.

C# The Process Cannot Access the File (2nd time only)

I am trying to download an image from a website and put it in a PictureBox.
// response contains the HttpWebResponse of the page where the image is
using (Stream inputStream = response.GetResponseStream()) {
    using (Stream outputStream = File.Open(fileName, FileMode.Create)) {
        byte[] buffer = new byte[4096];
        int bytesRead;
        do {
            bytesRead = inputStream.Read(buffer, 0, buffer.Length);
            outputStream.Write(buffer, 0, bytesRead);
        } while (bytesRead != 0);
    }
}
response.Close();
After that, the downloaded image is assigned to a PictureBox like such:
if (imageDownloaded) {
    pictureBox1.Image = Image.FromFile(filePath);
}
This all works like a charm the first time, but the second time I run the code I get a System.IO.IOException: "The process cannot access the file ... (file path) ... because it is being used by another process." I have no idea why...
I looked at four other threads, such as this one, but they were basically stressing the need to close streams, which I do, so none of them helped me.
Before you recommend using pictureBox1.Load(): I can't, because I need the image downloaded for further development.
EDIT 1: I have actually tried to dispose of the image by putting pictureBox1.Image = null before the code above is executed. It still gives me an exception.
The Image.FromFile docs state:
The file remains locked until the Image is disposed.
So you need to dispose of your Image too, to be able to overwrite the file.
Try using the Clone method on your image and disposing the original Image.
The issue is related to Image.FromFile. If we read the documentation, there is a note:
The file remains locked until the Image is disposed.
This pretty much explains the behavior you get.
To resolve this, you need to create a copy of the image and assign the copy to the PictureBox.
From MSDN:
The file remains locked until the Image is disposed.
This means it's the PictureBox that is holding the file open.
You have two options (a combined sketch follows this list):
Dispose of the PictureBox image before writing the new file.
Download the file, make a copy of it, and load the copy into the PictureBox; this allows you to overwrite the downloaded file freely.
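A sketch combining both options, assuming pictureBox1 and filePath from the question:

// 1. Release the old image (and its file lock) before downloading again.
if (pictureBox1.Image != null) {
    pictureBox1.Image.Dispose();
    pictureBox1.Image = null;
}

// ... download the new file to filePath here ...

// 2. Load a detached copy, so the file itself is never locked by the PictureBox.
using (var img = Image.FromFile(filePath)) {
    pictureBox1.Image = new Bitmap(img); // the Bitmap copy does not keep the file locked
}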
I'll be the contrarian here. :)
While cloning/copying the image will resolve the exception, it raises a different question: why are you overwriting the same file for each new image?
It seems to me that a better approach would be to download subsequent files to different file names (if they are really different files), or to simply reuse the file already downloaded instead of hitting the network again (if it's just a new request for the same file).
Even if you want to repeatedly download the same file (perhaps you expect the actual file contents to change), you can still download to a new name (e.g. append a number to the file name, use a GUID for the local file name, etc.)

StreamReader.ReadToEnd causes massive memory usage / leaks

What it does: for each EncryptedBase64PictureFile, it reads the content, decrypts the base64 string, and creates a PictureBox.
Where the problem is: insane memory usage! I guess some data is not released properly after each loop. For example, 100 iterations over roughly 100 MB of encrypted input, which should produce around 100 MB of image files, use around 1.5 GB of memory! And when I try to decrypt just a little more data, around 150 MB, I get an OutOfMemory exception. Visual Studio's memory profiling report says that the line string fileContent = reader.ReadToEnd(); is responsible for 80% of allocations.
// encryptedBase64PictureFiles: the collection of input files ("for each" pseudocode in the original)
foreach (string encryptedBase64PictureFile in encryptedBase64PictureFiles)
{
    Rijndael rijAlg = Rijndael.Create();
    rijAlg.Key = ASCIIEncoding.ASCII.GetBytes(sKey);
    rijAlg.IV = ASCIIEncoding.ASCII.GetBytes(sKey);
    FileStream fsread = new FileStream(encryptedBase64PictureFile, FileMode.Open, FileAccess.Read);
    ICryptoTransform desdecrypt = rijAlg.CreateDecryptor();
    CryptoStream cryptostreamDecr = new CryptoStream(fsread, desdecrypt, CryptoStreamMode.Read);
    StreamReader reader = new StreamReader(cryptostreamDecr);
    string fileContent = reader.ReadToEnd(); // this should be the memory eater
    var ms = new MemoryStream(Convert.FromBase64String(fileContent));
    PictureBox myPictureBox = new PictureBox();
    myPictureBox.Image = Image.FromStream(ms);
    ms.Close();
    reader.Close();
    cryptostreamDecr.Close();
    fsread.Close();
}
So the question is: is there a way to deallocate the memory properly after each loop? Or is the problem something else?
Thanks for any ideas!
EDIT:
Of course I tried to Dispose() all four streams, but the result was the same...
ms.Dispose();
reader.Dispose();
cryptostreamDecr.Dispose();
fsread.Dispose();
EDIT:
Found the problem. It was not Dispose(), but creating the picture from the stream. After deleting the picture, memory usage went from 1.5 GB to 20 MB.
EDIT:
The pictures are about 500 KB in .jpg format and around 700 KB in base64-encrypted form. But I really have no idea how big the PictureBox object is.
EDIT:
"100 loops with input around 100MB" meant that each loop takes around 1 MB; 100 MB is the total for 100 loops.
Another answer: live with it.
As in: you work with 100 MB blocks in what appears to be a 32-bit application. This will not work without reusing buffers, due to the large object heap and general memory fragmentation.
As in: the memory is there, just not in large enough contiguous blocks. This results in allocation errors.
There is no real way around this except going 64-bit, where the larger address space handles the issue.
Information about this may be found at:
https://connect.microsoft.com/VisualStudio/feedback/details/521147/large-object-heap-fragmentation-causes-outofmemoryexception
https://www.simple-talk.com/dotnet/.net-framework/large-object-heap-compaction-should-you-use-it/
The second link has a possible solution these days: enabling large object heap compaction.
GCSettings.LargeObjectHeapCompactionMode = GCLargeObjectHeapCompactionMode.CompactOnce;
GC.Collect(); // This can be omitted
LOH operations are expensive, but 100 MB areas moving around is not exactly a GC-recommended scenario. Not in 32-bit.
Use a base64 transform when decrypting your stream. Do not use Convert.FromBase64String, as that requires all of the data to be in memory.
using (FileStream f64 = File.Open(fileout, FileMode.Open))                               // content is in base64
using (var cs = new CryptoStream(f64, new FromBase64Transform(), CryptoStreamMode.Read)) // transform passed to constructor
using (var fo = File.Open(filein + ".orig", FileMode.Create))
{
    cs.CopyTo(fo); // stream is accessed as if it were already decoded
}
Code sample taken from this related answer:
Convert a VERY LARGE binary file into a Base64String incrementally
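Applied to the question's loop, a sketch of the whole pipeline might look like the following (sKey and encryptedFile follow the question's naming; this is an illustration, not tested against the original data). The decrypted bytes flow through the base64 transform without ever materializing the large string:

using (var rijAlg = Rijndael.Create())
{
    rijAlg.Key = ASCIIEncoding.ASCII.GetBytes(sKey);
    rijAlg.IV = ASCIIEncoding.ASCII.GetBytes(sKey);
    using (var fsread = new FileStream(encryptedFile, FileMode.Open, FileAccess.Read))
    using (var decrypted = new CryptoStream(fsread, rijAlg.CreateDecryptor(), CryptoStreamMode.Read))
    using (var decoded = new CryptoStream(decrypted, new FromBase64Transform(), CryptoStreamMode.Read))
    {
        var ms = new MemoryStream();
        decoded.CopyTo(ms); // only the image bytes; the giant base64 string never exists
        ms.Position = 0;
        // Deliberately not disposing ms: GDI+ needs the source stream alive
        // for the lifetime of the Image (see the first answer above).
        myPictureBox.Image = Image.FromStream(ms);
    }
}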

Loading saved byte array to memory stream causes out of memory exception

At some point in my program the user selects a bitmap to use as the background image of a Panel object. When the user does this, the program immediately draws the panel with the background image and everything works fine. When the user clicks "Save", the following code saves the bitmap to a DataTable object.
MyDataSet.MyDataTableRow myDataRow = MyDataSet.MyDataTableRow.NewMyDataTableRow(); // has a byte[] column named BackgroundImageByteArray
using (MemoryStream stream = new MemoryStream())
{
    this.Panel.BackgroundImage.Save(stream, ImageFormat.Bmp);
    myDataRow.BackgroundImageByteArray = stream.ToArray();
}
Everything works fine; there is no out-of-memory exception with this stream, even though it contains all the image bytes. However, when the application launches and loads the saved data, the following code throws an OutOfMemoryException:
using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray))
{
    this.Panel.BackgroundImage = Image.FromStream(stream);
}
The streams are the same length. I don't understand how one throws an out of memory exception and the other doesn't. How can I load this bitmap?
P.S. I've also tried
using (MemoryStream stream = new MemoryStream(myDataRow.BackgroundImageByteArray.Length))
{
    stream.Write(myDataRow.BackgroundImageByteArray, 0, myDataRow.BackgroundImageByteArray.Length); // throws the OoM exception here
}
The issue, I think, is here:
myDataRow.BackgroundImageByteArray = stream.ToArray();
Be advised: Stream.ToArray() will convert the stream to an array of bytes with length = stream.Length. Stream.Length is the size of the stream's buffer, which is going to be larger than the actual data loaded into it. You can work around this by using Stream.ReadByte() in a while loop until it returns -1, indicating the end of the data within the stream.
You might give this library a look.
http://arraysegments.codeplex.com/
Project Description
Lightweight extension methods for ArraySegment, particularly useful for byte arrays.
Supports .NET 4.0 (client and full), .NET 4.5, Metro/WinRT, Silverlight 4 and 5, Windows Phone 7 and 7.5, all portable library profiles, and XBox.

How can I send and receive a large file over HTTP in C#

I am working on developing an HTTP server/client, and I can currently send small files over it, such as .txt files and other easy-to-read files that do not require much memory. However, when I want to send a larger file, say an .exe or a large .pdf, I get memory errors. These occur because, before I try to send or receive a file, I have to specify the size of my byte[] buffer. Is there a way to get the size of the buffer while reading it from the stream?
I want to do something like this:
// Create the stream.
Stream dataStream = response.GetResponseStream();
// Read bytes from the stream into a buffer.
byte[] byteArray = new byte[Convert.ToInt32(dataStream.Length)];
dataStream.Read(byteArray, 0, byteArray.Length);
However, calling dataStream.Length throws:
NotSupportedException: This stream does not support seek operations.
Can someone offer some advice as to how I can get the length of my byte[] from the stream?
Thanks,
You can use the CopyTo method of the stream:
MemoryStream m = new MemoryStream();
dataStream.CopyTo(m);
byte[] byteArray = m.ToArray();
You can also write directly to a file:
using (var fs = File.Create("....."))
{
    dataStream.CopyTo(fs);
}
The network layer has no way of knowing how long the response stream is.
However, the server is supposed to tell you how long it is; look in the Content-Length response header.
If that header is missing or incorrect, you're out of luck; you'll need to keep reading until you run out of data.
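A sketch of using that header, assuming a WebResponse named response as in the question: ContentLength reflects the Content-Length header and is -1 when the server did not send one, in which case you fall back to buffered copying as in the other answer.

long declaredLength = response.ContentLength; // -1 if the header is missing
if (declaredLength > 0)
{
    byte[] byteArray = new byte[declaredLength];
    int offset = 0;
    using (Stream dataStream = response.GetResponseStream())
    {
        int bytesRead;
        // Read may return fewer bytes than requested, so keep reading until full.
        while (offset < byteArray.Length &&
               (bytesRead = dataStream.Read(byteArray, offset, byteArray.Length - offset)) > 0)
        {
            offset += bytesRead;
        }
    }
}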
