I'm hitting the common GDI+ error when trying to load a JPG image in C#. I'm not sure if it is due to the high resolution of this JPG (46495px × 4536px), because loading other low-resolution JPG files works fine. The problem file's size is 4696KB.
Code:
var newImage = Image.FromFile("demo.jpg"); //issue jpg
It also failed when using the Image.FromStream() API:
var stream = File.OpenRead("demo.jpg");
var image = Image.FromStream(stream);
I'd much appreciate it if anyone could help explain what's going on.
You need enough available RAM to store the decompressed image bitmap.
At 32 bits per pixel you will require width * height * 4 + c bytes free, where c is an unknown overhead that depends on the implementation of the drawing classes used.
Example
In your specific case, the calculation is as follows:
46495 * 4536 * 4 + c = 843,605,280 bytes + c ≈ 805 MB + c
Use the following to see how much memory is available for your bitmap.
Include a reference to the VisualBasic dll:
using Microsoft.VisualBasic.Devices;
The method is as follows:
Console.Out.Write(new ComputerInfo().AvailablePhysicalMemory + " bytes free");
...or...
Console.Out.Write((new ComputerInfo().AvailablePhysicalMemory / 1048576) + " MB free");
Find c
To find c, use the method above both before AND after an image load: load a number of images successfully and record the available memory before and after each load.
Experiment by comparing the memory used before and after loading images of different sizes; once you account for the size of the raw bitmap, you will arrive at a close approximation of c.
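Here is a minimal sketch of that measurement, assuming a test image at the hypothetical path "sample.jpg" and the VisualBasic reference above:
var info = new ComputerInfo();
ulong before = info.AvailablePhysicalMemory;
using (var image = Image.FromFile("sample.jpg"))
{
    ulong after = info.AvailablePhysicalMemory;
    long rawBitmapBytes = (long)image.Width * image.Height * 4; // 32 bpp assumption
    long consumed = (long)(before - after);
    Console.WriteLine("Raw bitmap: " + rawBitmapBytes + " bytes");
    Console.WriteLine("Approximate c: " + (consumed - rawBitmapBytes) + " bytes");
}
Bear in mind that AvailablePhysicalMemory reports system-wide free memory, so other processes add noise; averaging over several loads gives a steadier estimate.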
Be aware that all image types are converted to a raw bitmap internally for viewing, regardless of whether the file is stored as a .jpg, .png, .gif or whatever. So when I say bitmap, I'm not referring to the .bmp extension; I'm referring to a bitmap in the literal sense, i.e. a map of bits.
GDI+ will also throw an "OutOfMemoryException" if it does not support the pixel format of the file.
Related
I have an image (https://drive.google.com/file/d/16Xotc-2CJ6HkEJDysfKBkjClkU1OGiyQ/view?usp=sharing) that is grayscale, but every library I have tried (ImageMagick, ImageSharp, System.Drawing) seems to interpret it as black and white, even though when you open it in ImageJ, Photoshop, Incarta or many other programs you can clearly see it is grayscale.
Can anyone help me find a way to display this image? Here is one thing I've tried (I've tried almost a dozen different approaches):
TiffEncoder encoder = new TiffEncoder();
encoder.PhotometricInterpretation = SixLabors.ImageSharp.Formats.Tiff.Constants.TiffPhotometricInterpretation.BlackIsZero;
SixLabors.ImageSharp.Image image = SixLabors.ImageSharp.Image.Load(mysteryTiff);
PixelTypeInfo pixType = image.PixelType;
Stream stream = new MemoryStream();
image.SaveAsTiff(stream, encoder);
stream.Position = 0;
MagickImage magickImage = new MagickImage(stream);
// Stretches the image to fit the pictureBox.
pictureBox1.SizeMode = PictureBoxSizeMode.StretchImage;
pictureBox1.ClientSize = new System.Drawing.Size(1200, 1200);
pictureBox1.Image = magickImage.ToBitmap();
Can anyone display this image correctly? It will display correctly when uploaded to
What you have there, according to the image tag directory, is a 2024x2024 16-bpp greyscale LZW-compressed extended TIFF. It even opens in some software, which proves that it's not malformed. So far so good.
Now here's where it breaks down: 16-bpp greyscale is not supported by a lot of things. The 'why' is mildly convoluted, having to do largely with "but we all use 8 bits per channel, and so does the hardware, so why bother", but the end result isn't: if you want to use anything above 8 bits per channel, you'll either have to find something that will do the work for you or convert the data to 8-bpp at some point.
Even when the file format explicitly supports 16-bpp greyscale (TIFF and PNG, for instance), most libraries tend not to support reading or writing in that format, because it is so rarely used that they don't bother to implement it. I ended up writing my own PNG encoder for 16-bpp greyscale images (converted from 12-bpp and 16-bpp X-ray images), but the images aren't viewable in most programs that supposedly support the full PNG standard.
In this case, your best option is probably to write a conversion of your own for this type of file. Assuming the source application produces the same format (16-bpp, LZW-compressed) every time, it shouldn't be too difficult to convert the pixel buffer to 8-bpp and save it out as TIFF, PNG or whatever you like. You'll lose half of your greyscale (depth) resolution, but for display purposes the extra bits aren't going to help much anyway; they only really matter when there's a good reason to retain the full range of values.
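As an illustration, here is a hedged sketch that leans on ImageSharp's pixel-format conversion instead of a hand-rolled decoder; it assumes ImageSharp can decode the TIFF into 16-bit luminance (L16) pixels, and mysteryTiff is the path from the question:
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.PixelFormats;

using (Image<L16> image16 = SixLabors.ImageSharp.Image.Load<L16>(mysteryTiff))
using (Image<L8> image8 = image16.CloneAs<L8>())
{
    // CloneAs scales each 16-bit sample down to 8 bits,
    // halving the depth resolution as described above.
    image8.SaveAsPng("converted.png");
}
If the interesting data occupies only a narrow band of the 16-bit range (common with X-ray and microscopy captures), you may also need to stretch the contrast before converting, or the 8-bpp result will still look nearly black.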
A little bit of background:
I'm writing a bar code image scanner desktop app using WPF that can take input either from a file location (a previously scanned image) or directly from a scanner (using NTWAIN). In both cases I create or get a stream.
Now when I create a new Bitmap from the stream and save it as a JPEG file using an Encoder
using (var bmp = Image.FromStream(rawStream))
{
EncoderParameter ratio = new EncoderParameter(Encoder.Quality, 100L);
EncoderParameter depth = new EncoderParameter(Encoder.ColorDepth, 8L);
EncoderParameters codecParams = new EncoderParameters(2);
codecParams.Param[0] = ratio;
codecParams.Param[1] = depth;
ImageCodecInfo jpegCodecInfo = ImageCodecInfo.GetImageEncoders().FirstOrDefault(x => x.FormatID == ImageFormat.Jpeg.Guid);
bmp.Save(file.FileFullPath, jpegCodecInfo, codecParams); // Save to JPG
}
or the built-in
bmp.Save(file.FileFullPath, ImageFormat.Jpeg);
I tend to end up with much larger file sizes. Of course, this isn't always the case, but it is definitely true when I'm loading a small black-and-white TIFF file into memory and encoding it as JPG.
My knowledge of image handling is rudimentary, but I think it is because the JPG files are saved with a color depth of 24 bits while the TIFF images are originally stored at 1 bit (black and white).
No matter what I do, I can't get the jpg files to match the original file's bit depth.
The only workaround I found is simply renaming the file to "filename.jpg" and saving like so:
using (Bitmap bmp = new Bitmap(rawStream))
{
    bmp.Save(file.FileFullPath);
}
But this feels like a solution that won't work indefinitely. (As a side question: can one simply rename any *.bmp or *.tiff file to *.jpg and have it still work?)
Based on my initial research it seems like
bmp.Save()
doesn't honor the encoding parameter for bit depth in JPEG images. Understandably, my clients won't be happy having files grow from 16KB to 200KB for "no reason".
Is there a known work around for this problem or am I missing something obvious when it comes to working with streams and images?
JPEG works best for photographs with a multitude of colors, shades and gradients. Typical bit-depths: 8 (for greyscale) or 24 (for full color).
If you want monochrome (1-bit), I'd recommend against using JPEG, not least because JPEG will introduce encoding artifacts that may not matter for photographs, but which will look like added salt-and-pepper noise if your original source is 1-bit. And the more you compress, the more of it there will be.
You should try using PNG instead; it has no such artifacts and is better suited to digital sources with sharp edges.
You could also try making the TIFF smaller by 50% or 75% using a smart resize algorithm (with, e.g., 8-bit output) that converts micro-dots in the original into small gradients in the output, as sketched below. I did this long ago with 1-bit fax/scanner images, with quite good results, but it was too long ago for me to still have the sources.
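Since the original code is gone, here is only a rough sketch of the idea in System.Drawing; file names are placeholders, and note that GDI+ cannot draw onto indexed pixel formats, so the output stays 32-bpp even though the visual effect is the 8-bit-style grey gradients:
using (var source = (Bitmap)Image.FromFile("scan.tif"))            // 1-bit original
using (var half = new Bitmap(source.Width / 2, source.Height / 2)) // 50% target
using (var g = Graphics.FromImage(half))
{
    // Bicubic interpolation averages neighbouring pixels, turning
    // isolated black dots into grey gradients.
    g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
    g.DrawImage(source, 0, 0, half.Width, half.Height);
    half.Save("scan-half.png", ImageFormat.Png);
}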
I'm currently working on a small program that reads PNG files from disk, makes some modifications and saves them back. Everything is running smoothly except for one small problem: after I save a file back to disk, its size always increases; for example, a 27.1MB file becomes 33.3MB.
After some debugging I finally narrow it down to my reading and saving code. This is the code I'm currently using:
Bitmap img = new Bitmap(<path to file>);
//omitted
img.Save(<path to new file>, ImageFormat.Png);
I've verified that whether or not I make any modifications, simply reading and saving the image causes its size to change. Furthermore, if I open the saved file in Paint and save from there, the file shrinks back to its original size.
How do I read and save the image without changing its size?
Apart from the color depth and the number of channels (with or without alpha), the saved PNG file size depends mainly on two factors:
How the pre-processing on image lines (called filtering) is done.
The compression level for the deflate algorithm (0-9).
These two factors greatly affect the output file size. Filtering is empirical: you can use one of the five filter types (None, Sub, Up, Average, Paeth) for all image lines, use different filters for different lines, or even adaptively try several filters on each individual line and keep whichever compresses best. The adaptive approach is the most time-consuming and is impractical for most image writers.
After the filtering, the image data is deflate-compressed. The compression level for the deflate algorithm usually ranges from 0 to 9, from lowest to highest compression. The higher the compression level, the slower the compression process. Usually level 4 is the best choice for most images.
The filtering process plays a very important, sometimes crucial, role in PNG compression. Different filter choices can produce large differences in saved image size. By comparison, the file size is much less sensitive to the compression level.
You can use a tool like TweakPNG to check the color depth and number of channels the image contains. If the original and the re-saved image have the same color depth and channels, then most probably the filtering and compression level are the culprits behind the increased file size.
The truth is, if the encoder is not optimized, more often than not the file size will increase. There are, however, a lot of PNG optimization tools out there if you don't mind post-processing your resulting images.
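System.Drawing does not expose the filter method or the deflate level, but to illustrate the two knobs, here is a hedged sketch using ImageSharp's PngEncoder (assuming the SixLabors.ImageSharp package; file names are placeholders):
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Png;

using (var image = SixLabors.ImageSharp.Image.Load("input.png"))
{
    var encoder = new PngEncoder
    {
        // Try each filter per scanline and keep the best result.
        FilterMethod = PngFilterMethod.Adaptive,
        // Deflate level 9: slowest, strongest compression.
        CompressionLevel = PngCompressionLevel.Level9
    };
    image.Save("output.png", encoder);
}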
Have you tried playing with the Encoder.ColorDepth field? PNG also supports transparency and might be saving some information not needed by your image.
ImageCodecInfo pngCodec = ImageCodecInfo.GetImageEncoders().Where(codec => codec.FormatID.Equals(ImageFormat.Png.Guid)).FirstOrDefault();
if (pngCodec != null)
{
EncoderParameters parameters = new EncoderParameters();
parameters.Param[0] = new EncoderParameter(Encoder.ColorDepth, 24L); // 8, 16, 24, 32 (based on your format)
image.Save(stream, pngCodec, parameters);
}
Additional info here: https://msdn.microsoft.com/en-us/library/system.drawing.imaging.encoder.colordepth(v=vs.110).aspx
I think you are missing the compression part.
Add to your code like this -
Bitmap img = new Bitmap(<path to file>);
here is what you missed -
ImageCodecInfo myImageCodecInfo = GetEncoderInfo("image/jpeg"); // GetEncoderInfo is the MSDN helper that scans ImageCodecInfo.GetImageEncoders() for a MIME type
EncoderParameter myEncoderParameter = new EncoderParameter(Encoder.Quality, 25L);
EncoderParameters myEncoderParameters = new EncoderParameters(1);
myEncoderParameters.Param[0] = myEncoderParameter;
and save like this -
img.Save(<path to file>, myImageCodecInfo, myEncoderParameters);
Here is the MSDN link. Hope it helps.
If I try to create a bitmap bigger than 19000 px, I get the error: "Parameter is not valid."
How can I work around this?
System.Drawing.Bitmap myimage= new System.Drawing.Bitmap(20000, 20000);
Keep in mind, that is a LOT of memory you are trying to allocate with that Bitmap.
Refer to http://social.msdn.microsoft.com/Forums/en-US/netfxbcl/thread/37684999-62c7-4c41-8167-745a2b486583/
.NET is likely refusing to create an image that uses up that much contiguous memory all at once.
Slightly harder to read, but this reference helps as well:
Each image in the system has the amount of memory defined by this formula:
bit-depth * width * height / 8
This means that an image 40800 pixels by 4050 will require over 660 megabytes of memory.
19000 pixels square, at 32bpp, would require 11,552,000,000 bits (about 1.34 GB) to store the raster in memory. That's just the raw pixel data; any additional overhead inherent in System.Drawing.Bitmap adds to that. Going up to 20k pixels square at the same color depth would require about 1.5 GB just for the raw pixel memory. With that single object you are using 3/4 of the space available to the entire application in a 32-bit environment. A 64-bit environment has looser limits (usually), but you're still using 3/4 of the maximum size of a single object.
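As a quick, purely illustrative sanity check of those numbers:
// 32 bpp = 4 bytes per pixel; the long cast avoids int overflow.
Console.WriteLine((long)19000 * 19000 * 4); // 1,444,000,000 bytes ≈ 1.34 GB
Console.WriteLine((long)20000 * 20000 * 4); // 1,600,000,000 bytes ≈ 1.49 GB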
Why do you need such a colossal image size? Viewed at 1280x1024 res on a computer monitor, an image 19000 pixels on a side would be 14 screens wide by 18 screens tall. I can only imagine you're doing high-quality print graphics, in which case a 720dpi image would be a 26" square poster.
Set the PixelFormat when you create the Bitmap, like:
new Bitmap(2000, 40000, PixelFormat.Format16bppRgb555)
and with the exact number above, it works for me. This may partly solve the problem.
I suspect you're hitting memory cap issues. However, there are many reasons a bitmap constructor can fail. The main reasons are GDI+ limits in CreateBitmap. System.Drawing.Bitmap, internally, uses the GDI native API when the bitmap is constructed.
That being said, a bitmap of that size is well over a GB of RAM, and it's likely that you're either hitting the scan line size limitation (64KB) or running out of memory.
Got this error when opening a TIF file. The problem was that GDI+ is not able to open CMYK; changing the colorspace from CMYK to RGB made the error go away.
So I used the TagLib library to get the image dimensions instead.
Code sample:
try
{
    // Try GDI+ first; this throws for formats it cannot decode (e.g. CMYK TIFFs).
    using (var image = new System.Drawing.Bitmap(filePath))
    {
        return string.Format("{0}px by {1}px", image.Width, image.Height);
    }
}
catch (Exception)
{
    try
    {
        // Fall back to TagLib#, which reads the dimensions from the file's
        // metadata without decoding the pixel data.
        TagLib.File file = TagLib.File.Create(filePath);
        return string.Format("{0}px by {1}px", file.Properties.PhotoWidth, file.Properties.PhotoHeight);
    }
    catch (Exception)
    {
        return "";
    }
}
I'm making a C#/WPF Windows 8 Store app and I'm trying to load some PNGs/JPGs to display them in a view. The images are all reasonably high resolution, but their file sizes are normally only around 200KB or so. The problem is that when I load them using the BitmapImage class (which is the only one I can find), the total memory used jumps to hundreds of megabytes. From what I can tell, it takes the PNG/JPG and converts it to a bitmap, which massively increases the memory usage. So far I haven't found a way around this, although it seems like there should be a simple solution.
Is there something really obvious I'm missing?
My code is below:
private async Task TestFunction(IReadOnlyList<StorageFile> files)
{
var images = new ObservableCollection<Image>();
imagePanel.ItemsSource = images;
foreach (var file in files)
{
var bitmap = new BitmapImage();
var item = await file.OpenAsync(FileAccessMode.Read);
bitmap.SetSource(item);
var image = new Image();
image.Source = bitmap;
image.Height = 200;
images.Add(image);
}
}
If the image on disk is in a compressed format (and most image file formats use some form of compression), the in-memory footprint will be larger.
For example, if the image is 100x100 pixels and uses 8 bits of colour depth (one byte per pixel), the raw data for that image takes up 100 x 100 = 10,000 bytes, and that's the amount of data that has to be rendered to the screen.
If you're looking for a way to reduce memory usage in your WPF application, there are a few options you can try; a combined sketch follows these options.
Don't cache the images in memory, or pick the best time to load them, using BitmapCacheOption, e.g.: bitmap.CacheOption = BitmapCacheOption.None. This fills the image as needed from disk; if the images are only 200KB the performance drop should not be too bad, but it will not be as fast as caching.
Make sure you are not rendering images bigger than they need to be. If the element you are displaying the image on is 200x200 and the image is 1024x768, you can set DecodePixelWidth, e.g.: bitmap.DecodePixelWidth = 200. This creates the bitmap at the size you define instead of its actual size.
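Putting both options together, here is a minimal sketch assuming the WPF System.Windows.Media.Imaging.BitmapImage (the WinRT BitmapImage used in Store apps exposes DecodePixelWidth but not CacheOption); the path is a placeholder:
var bitmap = new BitmapImage();
bitmap.BeginInit();
bitmap.UriSource = new Uri(@"C:\images\photo.jpg");
// Decode at display size instead of full resolution.
bitmap.DecodePixelWidth = 200;
// Don't keep a decoded copy cached in memory.
bitmap.CacheOption = BitmapCacheOption.None;
bitmap.EndInit();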
PNG, JPG and all other forms of image compression are useful for storage only. In order to display compressed image content in WPF, you have to decompress it to a bitmap, which is a raw one-to-one representation of the image data.
If you were not to store decompressed image data in memory, then every time the system tried to reference the image for display it would have to decompress the image again, using precious CPU resources. In the case of popular formats like PNG or JPG, the compression and decompression process is rather complex.
There are image compression formats out there which are designed for dynamic decompression. However, these formats, such as DXT1-5, are typically only supported by 3D libraries (more info here).