Stitching Together Thousands of Bitmaps - C#

I have a List as such:
List<Bitmap> imgList = new List<Bitmap>();
I need to iterate through that list very quickly and stitch all the images into one, like so:
using (Bitmap b = new Bitmap(imgList[0].Width, imgList[0].Height * imgList.Count, System.Drawing.Imaging.PixelFormat.Format24bppRgb))
{
    using (Graphics g = Graphics.FromImage(b))
    {
        for (int i = 0; i < imgList.Count; i++)
        {
            g.DrawImage(imgList[i], 0, i * imgList[i].Height);
        }
    }
    b.Save(fileName, ImageFormat.Bmp);
}
imgList.Clear();
The problem I'm running into is that the images are 2000 wide by 2 high, and there could be 30,000-100,000 images in the list. When I try to make a blank bitmap that size, I get a "Parameter is not valid" error. Any help would be GREATLY appreciated.

The size of the block of memory you need is 2 x 100,000 rows tall by 2,000 pixels wide, times the number of bytes per pixel: 1,200,000,000 bytes at 24 bits per pixel or 1,600,000,000 bytes at 32 bits per pixel. In other words ~1.2GB or ~1.6GB, in one contiguous block. A 32-bit address space is just too darn cramped to hand out a single allocation that size, so sucks to be you.
Or does it?
Since you want to create a file from this, you should be concerned with the file format's limits rather than your own memory limits. Lucky for you, the integer type used for image dimensions in a BMP is 32 bits, which means that a 200,000 x 2,000 image is totally within those limits. So whether or not you can make that image in memory, you can make the file.
This involves making your own version of a BMP encoder. It's not that bad - it's mostly writing a BMP file header and a DIB header (54 bytes total for the classic BITMAPINFOHEADER layout) and then the raw pixel data. Of course, once you're done you'll be hard-pressed to find software that will open an image that size, but that's someone else's problem, right?
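To make that concrete, here is a minimal sketch of the idea under a few assumptions: every tile in imgList has the same width, rows may be written top-down (negative height in the DIB header), and the usual System.Drawing, System.Drawing.Imaging, System.IO and System.Runtime.InteropServices usings are in place. WriteStitchedBmp is an illustrative name, not an existing API.
static void WriteStitchedBmp(string fileName, List<Bitmap> imgList)
{
    int width = imgList[0].Width;
    int tileHeight = imgList[0].Height;
    int totalHeight = tileHeight * imgList.Count;
    int rowSize = ((width * 3 + 3) / 4) * 4;            // 24bpp rows are padded to 4-byte boundaries
    long pixelBytes = (long)rowSize * totalHeight;      // ~1.2 GB for 2000 x 200,000

    using (var fs = new FileStream(fileName, FileMode.Create))
    using (var w = new BinaryWriter(fs))
    {
        // BITMAPFILEHEADER (14 bytes)
        w.Write((byte)'B'); w.Write((byte)'M');
        w.Write((uint)(14 + 40 + pixelBytes));           // file size (a 32-bit field, per the limits above)
        w.Write((uint)0);                                // reserved
        w.Write((uint)(14 + 40));                        // offset to the pixel data

        // BITMAPINFOHEADER (40 bytes)
        w.Write((uint)40);
        w.Write(width);
        w.Write(-totalHeight);                           // negative height = top-down row order
        w.Write((ushort)1);                              // planes
        w.Write((ushort)24);                             // bits per pixel
        w.Write((uint)0);                                // BI_RGB, uncompressed
        w.Write((uint)pixelBytes);
        w.Write(2835); w.Write(2835);                    // ~72 dpi, in pixels per metre
        w.Write((uint)0); w.Write((uint)0);              // colors used / important

        // stream each tile's rows straight to disk; the full image never exists in memory
        byte[] row = new byte[rowSize];
        foreach (Bitmap tile in imgList)
        {
            BitmapData data = tile.LockBits(new Rectangle(0, 0, width, tileHeight),
                                            ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
            for (int y = 0; y < tileHeight; y++)
            {
                Marshal.Copy(new IntPtr(data.Scan0.ToInt64() + (long)y * data.Stride), row, 0, width * 3);
                w.Write(row);                            // padding bytes stay zero
            }
            tile.UnlockBits(data);
        }
    }
}
If a viewer refuses the top-down (negative height) form, write the tiles in reverse order with a positive height instead; the rest of the header stays the same.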

Related

Parameter is not valid - Bitmap constructor error [duplicate]

If I try to create a bitmap bigger than 19000 px, I get the error: "Parameter is not valid."
How can I work around this?
System.Drawing.Bitmap myimage= new System.Drawing.Bitmap(20000, 20000);
Keep in mind, that is a LOT of memory you are trying to allocate with that Bitmap.
Refer to http://social.msdn.microsoft.com/Forums/en-US/netfxbcl/thread/37684999-62c7-4c41-8167-745a2b486583/
.NET is likely refusing to create an image that uses up that much contiguous memory all at once.
Slightly harder to read, but this reference helps as well:
Each image in the system has the amount of memory defined by this formula:
bit-depth * width * height / 8
This means that an image 40800 pixels by 4050 will require over 660 megabytes of memory.
19000 pixels square, at 32bpp, would require 11,552,000,000 bits (about 1.44 GB) to store the raster in memory. That's just the raw pixel data; any additional overhead inherent in the System.Drawing.Bitmap would add to that. Going up to 20k pixels square at the same color depth would require 1.6GB just for the raw pixel memory. In a single object, you are using roughly 3/4 of the space reserved for the entire application in a 32-bit environment. A 64-bit environment has looser limits (usually), but you're still using roughly 3/4 of the max size of a single object.
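For reference, here is the quoted formula as a tiny helper (an illustrative name, not a framework API), with the 19000-pixel case worked out:
// bits-per-pixel * width * height / 8, done in 64-bit arithmetic to avoid overflow
static long BitmapMemoryBytes(int width, int height, int bitsPerPixel)
{
    return (long)bitsPerPixel * width * height / 8;
}
// BitmapMemoryBytes(19000, 19000, 32) == 1,444,000,000 bytes, i.e. roughly 1.44 GB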
Why do you need such a colossal image size? Viewed at 1280x1024 res on a computer monitor, an image 19000 pixels on a side would be 14 screens wide by 18 screens tall. I can only imagine you're doing high-quality print graphics, in which case a 720dpi image would be a 26" square poster.
Set the PixelFormat when you create the bitmap, like:
new Bitmap(2000, 40000, PixelFormat.Format16bppRgb555)
With exactly those numbers it works for me, so this may partly solve the problem.
I suspect you're hitting memory cap issues. However, there are many reasons a bitmap constructor can fail; the main ones are the GDI+ limits in CreateBitmap, since System.Drawing.Bitmap uses the native GDI+ API internally when the bitmap is constructed.
That being said, a bitmap of that size is well over a GB of RAM, and it's likely that you're either hitting the scan line size limitation (64KB) or running out of memory.
I got this error when opening a TIF file. The problem was that GDI+ could not open the image's colorspace (CMYK); after changing the colorspace, I didn't get the error.
Rather than convert the files, I used the taglib library to read the image dimensions instead.
Code sample:
try
{
    var image = new System.Drawing.Bitmap(filePath);
    return string.Format("{0}px by {1}px", image.Width, image.Height);
}
catch (Exception)
{
    try
    {
        TagLib.File file = TagLib.File.Create(filePath);
        return string.Format("{0}px by {1}px", file.Properties.PhotoWidth, file.Properties.PhotoHeight);
    }
    catch (Exception)
    {
        return "";
    }
}

How to decrease memory usage from multiple images?

My app downloads six images from here and plays them back in a loop. I download the images in a GIF format, convert them to PNG format using .NET Image Tools, and store each one as a BitmapImage, in a List<BitmapImage>.
The code I use to add the downloaded image to the list of images is:
List<BitmapImage> images = new List<BitmapImage>();
// WebClient used for download
...
GifDecoder decoder = new GifDecoder();
ExtendedImage eim = new ExtendedImage();
decoder.Decode(eim, DOWNLOADEDIMAGESTREAM);
using (MemoryStream ms = new MemoryStream())
{
    WriteableBitmap wbmp = eim.ToBitmap();
    PngEncoder encoder = new PngEncoder();
    encoder.Encode(eim, ms);
    ms.Flush();
    ms.Position = 0;
    BitmapImage bmp = new BitmapImage();
    bmp.SetSource(ms);
    ms.Close();
    images.Add(bmp);
}
e.Result.Dispose();
Each converted image is about 10-20 KB, at 600 px by 550 px. (The original GIFs are about two-thirds that size.)
After downloading the images, my memory usage is around 80 MB. Without downloading them, it is around 50 MB. 30 MB seems like a lot of memory for storing six images with a total size of around 90 KB. In addition, it cuts my framerate down to about 5 or 6, which causes performance issues when the user zooms or moves my image. (I am not currently displaying the images, just storing them in memory. The image I am using to zoom and move is a test image, and it was present during both of my memory measurements.)
I also wanted to increase the size of the images downloaded, but the amount of memory they already use makes this unreasonable.
Forget about how big the compressed image is. Once you create a bitmap from it, it's going to be 600 x 550 x (3 or 4, probably) bytes per pixel, so you're looking at over 1 MB for each image; in memory they're stored as uncompressed bitmaps. That doesn't account for all 30 MB, but if you're really concerned about the details of your memory usage, use something like SciTech's .NET Memory Profiler (trial available here: http://memprofiler.com/) and you can find out for sure where the memory is being taken up.
I'm not affiliated with SciTech. I've used the profiler a few times over the past decade (including a stretch of a few years where I used it regularly on a project), and I've found it to be one of the more accurate ways of determining how memory is used in .NET. Otherwise it's a lot of guessing, often based on wrong assumptions.
From my point of view, you can work around this on WP7 because the phone screen is small and you can't display the whole image at full size anyway. So after downloading, instead of keeping the original file, scale each image down to the width and height of the phone screen. Just my two cents.
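A hedged sketch of that "decode at screen size" idea: BitmapImage.DecodePixelWidth exists on WPF's BitmapImage and was added to Windows Phone's BitmapImage in WP8, but the WP7 Silverlight BitmapImage does not expose it, so there you would have to scale the image down before encoding instead. The 480 value is just an illustrative phone screen width; ms and images are the stream and list from the question's code.
BitmapImage bmp = new BitmapImage();
bmp.BeginInit();                                 // WPF-style initialization
bmp.CacheOption = BitmapCacheOption.OnLoad;      // read the stream now rather than keeping it open
bmp.DecodePixelWidth = 480;                      // decode no wider than the target screen
bmp.StreamSource = ms;
bmp.EndInit();
images.Add(bmp);
Decoding at the display size is what actually shrinks the in-memory footprint; re-encoding the PNG more aggressively only changes the compressed size, not the decoded bitmap.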

Ensuring exported JPEG is less than maximum file size

I currently have an application which takes a screenshot of a presenter's desktop and then broadcasts it via a custom protocol to the viewers. In order for the images to be transferred quickly enough to get a frame rate of 2-3 images per second, I need to ensure the image size is always less than ~300 KB.
I'm using C# for the presenter application, which encodes the screenshot into a JPEG via the process below. My concern is that the image quality can vary greatly when using a static compression setting. If the application is capturing my screen, the output is ~200 KB when I have Visual Studio full screen, but if I minimize everything so my desktop background is showing, it jumps to ~400 KB.
I could put the encoding process into a loop and continuously decrease the image quality until the size of the byte array is less than 300 KB, but that seems like a tedious operation. Is there any other method I could use?
Thanks in advance.
// get the screenshot
System.Drawing.Rectangle totalSize = System.Drawing.Rectangle.Empty;
//foreach (Screen s in Screen.AllScreens)
totalSize = System.Drawing.Rectangle.Union(totalSize, Screen.PrimaryScreen.Bounds);
Bitmap screenShotBitmap = new Bitmap(totalSize.Width, totalSize.Height, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
screenShotBitmap.SetResolution(96, 96);
Graphics screenShotGraphics = Graphics.FromImage(screenShotBitmap);
screenShotGraphics.CopyFromScreen(totalSize.X, totalSize.Y,
0, 0, totalSize.Size, CopyPixelOperation.SourceCopy);
screenShotGraphics.Dispose();
// image codec information
ImageCodecInfo imageCodecInfo = GetEncoderInfo("image/jpeg");
// encoder settings
System.Drawing.Imaging.Encoder encoderQuality;
System.Drawing.Imaging.Encoder encoderColor;
encoderQuality = System.Drawing.Imaging.Encoder.Quality;
encoderColor = System.Drawing.Imaging.Encoder.ColorDepth;
// compression & quality for JPEG output
Int64 quality = 40L;
// storage for exported JPEG
byte[] screenShotByteArray;
// encoder parameters
EncoderParameter encoderQualityParameter = new EncoderParameter(encoderQuality, quality);
//EncoderParameter encoderColorParameter = new EncoderParameter(encoderColor, 8L);
// encoder parameters table
EncoderParameters encoderParameters = new EncoderParameters(1);
encoderParameters.Param[0] = encoderQualityParameter;
//encoderParameters.Param[1] = encoderColorParameter;
// save the screenshot into a memory stream
MemoryStream screenShotMemoryStream = new MemoryStream();
screenShotBitmap.Save(screenShotMemoryStream, imageCodecInfo, encoderParameters);
// convert to a byte array (ToArray copies only the bytes actually written;
// GetBuffer would return the whole internal buffer, including unused capacity)
screenShotByteArray = screenShotMemoryStream.ToArray();
// close the memory stream
screenShotMemoryStream.Close();
If you're putting things into a loop, be careful to use something similar to binary search instead of just increasing/decreasing the quality parameter by a fixed amount until the desired size is reached.
EDIT: Explaining the binary search a bit. Take the hypothetical case of a picture that compresses to quality * 10,000 bytes, so the optimal quality setting would be 30. The naive approach would be to try some fixed quality setting (e.g. 80, which would give 800,000 bytes) and then decrease it by a fixed amount until 300,000 bytes are reached. If you decrease the quality by 5 at each step, you'd try about a dozen quality settings before finding the right one. A binary search gives a result faster, like this:
Quality   Size      Next step
80        800,000   Too big, so quality := quality / 2
40        400,000   Too big, so quality := quality / 2
20        200,000   Too small, so quality := (40 + 20) / 2
30        300,000   Reached the desired size
This gives the result after only four tries (or three, depending on whether 200,000 bytes is too small or just fine for you). Since size doesn't have a linear relation to quality, this example is a bit unrealistic, but binary search should still give you better results than the naive approach.
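Here is a minimal sketch of that binary search over the quality setting, reusing the GetEncoderInfo helper and System.Drawing.Imaging types from the question; the method name and the 10-95 quality range are illustrative assumptions.
static byte[] EncodeJpegUnderLimit(Bitmap bitmap, long maxBytes)
{
    ImageCodecInfo jpegCodec = GetEncoderInfo("image/jpeg");   // same helper as in the question
    long lo = 10, hi = 95;                                     // quality range to search
    byte[] best = null;
    while (lo <= hi)
    {
        long quality = (lo + hi) / 2;
        using (var parameters = new EncoderParameters(1))
        using (var ms = new MemoryStream())
        {
            parameters.Param[0] = new EncoderParameter(System.Drawing.Imaging.Encoder.Quality, quality);
            bitmap.Save(ms, jpegCodec, parameters);
            if (ms.Length <= maxBytes)
            {
                best = ms.ToArray();      // fits: remember it and try a higher quality
                lo = quality + 1;
            }
            else
            {
                hi = quality - 1;         // too big: try a lower quality
            }
        }
    }
    return best;                          // null means even the lowest quality exceeded maxBytes
}
With the 10-95 range this settles in about seven encodes regardless of the image, and it returns the highest quality that still fits under the limit.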
You could also use some typical images for "training". Encode them using different quality settings (e.g. 100, 90, ..., 20, 10) and see how big they get relative to their original size. This can give a good first estimate in most cases, although you will still have to adjust when encountering images with much more or less detail in them.
Alternatively, have a look at JPEG2000 encoders, those have the option to set a filesize instead of quality.
EDIT: I don't know of any JPEG2000 encoding libraries for C#; there only seem to be decoders floating around, so this could get more complicated than I thought at first. You might give CSJ2K a try, but the description doesn't sound like it's ready to use.

How to load a specific patch/rectangle from an image?

We have an application that shows a large image file (a satellite image) from a local network resource.
To speed up the rendering, we divide the image into smaller patches (e.g. 6x6 cm) and the app tiles them appropriately.
But each time the satellite image is updated, the dividing pre-process has to be run again, which is time-consuming.
I wonder how we can load the patches directly from the original file?
PS 1: I found the LeadTools library, but we need an open source solution.
PS 2: The app is in .NET C#
Edit 1:
The format is not a sticking point for us; currently it's JPG.
Changing to another format could be considered, but BMP is hardly acceptable because of its large size.
I wrote a beautiful attempt at an answer to your question, but my browser ate it... :(
Basically what I tried to say was:
1.- Since JPEG (and most compression formats) uses sequential compression, you'll always need to decode all the bits that come before the ones you need.
2.- The solution I propose needs to be implemented for each format you need to support.
3.- There are a lot of open source JPEG decoders that you could modify. JPEG decoders decode blocks of bits (of variable size) that turn into 8x8 pixel blocks. What you could do is modify the code to keep in memory only the blocks you need and discard all the others as soon as they aren't needed any more (basically as soon as they are decoded). With those saved blocks, create the image you need.
4.- Since JPEG works with 8x8 blocks, your work will be easier if your patches have dimensions that are multiples of 8 pixels.
5.- The modified JPEG decoder could replace the preprocessing you are doing now if you save each patch and discard the blocks as soon as you complete them. It would be really fast and use less memory.
I know it needs a lot of work and there are a lot of details to take into consideration (especially if you work with color images), but if you need performance, I believe you will always end up fighting or playing (however you want to see it) with the bytes.
Hope it helps.
I'm not 100% sure what you're after but if you're looking for a way to go from string imagePath, Rectangle desiredPortion to a System.Drawing.Image object then perhaps something like this:
public System.Drawing.Image LoadImagePiece(string imagePath, Rectangle desiredPortion)
{
    using (Image img = Image.FromFile(imagePath))
    {
        Bitmap result = new Bitmap(desiredPortion.Width, desiredPortion.Height, PixelFormat.Format24bppRgb);
        using (Graphics g = Graphics.FromImage((Image)result))
        {
            g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
            g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
            g.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
            g.CompositingQuality = System.Drawing.Drawing2D.CompositingQuality.HighQuality;
            g.DrawImage(img, 0, 0, desiredPortion, GraphicsUnit.Pixel);
        }
        return result;
    }
}
Note that for performance reasons you may want to consider building multiple output images at once rather than calling this multiple times - perhaps passing it an array of rectangles and getting back an array of images, along the lines of the sketch below.
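A hedged sketch of that batched variant, using the same System.Drawing calls as above; the method name and parameters are illustrative:
public static List<Image> LoadImagePieces(string imagePath, IEnumerable<Rectangle> portions)
{
    var results = new List<Image>();
    using (Image img = Image.FromFile(imagePath))    // the source is decoded only once per call
    {
        foreach (Rectangle portion in portions)
        {
            var piece = new Bitmap(portion.Width, portion.Height, PixelFormat.Format24bppRgb);
            using (Graphics g = Graphics.FromImage(piece))
            {
                // copy the requested source rectangle into the top-left of the new bitmap
                g.DrawImage(img, new Rectangle(0, 0, portion.Width, portion.Height),
                            portion, GraphicsUnit.Pixel);
            }
            results.Add(piece);
        }
    }
    return results;
}
This still decodes the whole source image once, which is the cost the other answer describes; the saving is only in not re-decoding it for every patch.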
If that's not what you're after, can you clarify what you're actually looking for?

Copy one Bitmap onto a larger Bitmap without using Graphics.DrawImage

This is a follow-up to Rendering to a single Bitmap object from multiple threads.
What I'm trying to achieve is to take a bitmap of, say, 50x50 pixels and draw it onto a larger bitmap (100x100 pixels) at any point on the larger image, using the bitmap's LockBits function or any other approach, but NOT Graphics.DrawImage. My reasons for not wanting to use DrawImage are stated in the other thread.
I have managed to get something by using Marshal.Copy from the source BitmapData to the destination BitmapData, but it's producing a tiled, horizontally stretched image.
You could manipulate the image in memory without relying on any system calls. If you dig into the underlying format of a .BMP file, you could build your own Device Independent Bitmap class that truly "understands" the low-level format of a .BMP.
For example a 8 bit per pixel image is essentially a 2 dimensional array of bytes (each byte is 1 pixel) plus a simple color table. Roughly speaking (and this is very very rough):
byte[,] bColors = new byte[3,256]; // 256 RGB colors
byte[,] bImage = new byte[25,50]; // 25 x 50 pixels
The trick is (as always) getting a hold of the raw pixel data, doing the processing, and then updating the raw pixel data with your changes.
In the past I've approached this by converting a GDI HBITMAP into a 24bpp DIB, doing my funky image processing on the raw pixels (3 bytes per pixel makes this easier), then converting the DIB back into an HBITMAP. This was all using classic GDI (pre GDI+ even, let alone C#).
Using that approach you could design a control structure to allow multiple writers to different sections of your much bigger image.
However... the low-level BitBlt GDI calls are likely to be way more efficient than anything you can do yourself. If I were you, I'd make certain that just doing 50 or 100 BitBlts in a row really is too slow (you'd likely need to do this in C++).
The most annoying challenges when dealing with DIBs are:
Converting a DIB to an actual "image" ready for display
Converting an actual "image" into a DIB
Saving a DIB as something other than a .BMP
Core references when I started learning the "horror" that images actually are:
http://msdn.microsoft.com/en-us/library/dd183562(VS.85).aspx
http://msdn.microsoft.com/en-us/library/dd144879(VS.85).aspx
http://msdn.microsoft.com/en-us/library/dd162973(VS.85).aspx
How you go about getting to/from .NET Images... well... that's a good question :)
This should work just fine using LockBits/BitmapData, if you are using a 32bpp [P]ARGB pixel format. The trick is that you will have to copy the data one row at a time so that it aligns in the correct places. You should be able to do this using something like:
Rectangle srcArea = new Rectangle(0, 0, srcBitmap.Width, srcBitmap.Height);
BitmapData srcData = srcBitmap.LockBits(srcArea, ImageLockMode.ReadOnly, destBitmap.PixelFormat);
Rectangle destArea = new Rectangle(25, 25, srcBitmap.Width, srcBitmap.Height);
BitmapData destData = destBitmap.LockBits(destArea, ImageLockMode.WriteOnly, destBitmap.PixelFormat);
IntPtr srcPtr = srcData.Scan0;
IntPtr destPtr = destData.Scan0;
byte[] buffer = new byte[srcData.Stride];
for (int i = 0; i < srcData.Height; ++i)
{
    Marshal.Copy(srcPtr, buffer, 0, buffer.Length);
    Marshal.Copy(buffer, 0, destPtr, buffer.Length);
    // advance both pointers by one full row
    srcPtr = new IntPtr(srcPtr.ToInt64() + srcData.Stride);
    destPtr = new IntPtr(destPtr.ToInt64() + destData.Stride);
}
srcBitmap.UnlockBits(srcData);
destBitmap.UnlockBits(destData);
As a warning, double-check the pointer arithmetic: IntPtr only gained an addition operator (and IntPtr.Add) in .NET 4, which is why the loop above advances the pointers through ToInt64 instead. I've done this same type of thing before, but in C++. Also, I don't know if there is a way to copy the data directly instead of using an intermediate buffer.
An additional caveat: the LockBits call on srcBitmap and the sizing of the buffer assume that srcBitmap is completely enclosed within destBitmap. If that is not the case (some part of the bitmap would be cropped off), the locked area and the buffer size need to be adjusted.
If you are not using a 32bpp pixel format (e.g. 24bpp), it will be more difficult. The stride of your source BitmapData may include some amount of padding that should not be copied. You can work around this by calculating the amount of actual pixel data in a source row and copying only that much, as in the sketch below. Indexed pixel formats would be even more work.
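A sketch of that 24bpp adjustment, reusing the srcData/destData locals from the code above: copy only the bytes that hold real pixel data in each row, not the full padded stride.
int bytesPerRow = srcData.Width * 3;      // 3 bytes per pixel for Format24bppRgb
byte[] rowBuffer = new byte[bytesPerRow];
for (int y = 0; y < srcData.Height; ++y)
{
    Marshal.Copy(new IntPtr(srcData.Scan0.ToInt64() + (long)y * srcData.Stride), rowBuffer, 0, bytesPerRow);
    Marshal.Copy(rowBuffer, 0, new IntPtr(destData.Scan0.ToInt64() + (long)y * destData.Stride), bytesPerRow);
}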
I would recommend taking a look at the internal bitmap memory structure.
The best approach, I think, would be to not try to set the BitmapData directly. Instead, I would make a single, shared byte array of the appropriate size, and set the byte array directly from your smaller images.
When you compose your larger image, you can take the final byte array and directly make a Bitmap from the byte data.
This has the advantage of allowing you to control the memory management, thread the operations, etc, as you seemed to want to do in your original post. It should be very fast for the data access, as well.
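A minimal sketch of that shared-buffer approach, assuming a 32bpp layout and example dimensions; the variable names are illustrative. The Bitmap(width, height, stride, format, scan0) constructor wraps existing memory, so the array must stay pinned for as long as the Bitmap is in use.
int width = 100, height = 100;                 // size of the composed image
int stride = width * 4;                        // 32bpp: 4 bytes per pixel, already 4-byte aligned
byte[] pixels = new byte[stride * height];     // the shared buffer the worker threads write into

// ... threads copy their 50x50 tiles into the appropriate offsets of 'pixels' here ...

GCHandle handle = GCHandle.Alloc(pixels, GCHandleType.Pinned);
try
{
    using (var composed = new Bitmap(width, height, stride,
                                     PixelFormat.Format32bppArgb,
                                     handle.AddrOfPinnedObject()))
    {
        composed.Save("composed.png", ImageFormat.Png);
    }
}
finally
{
    handle.Free();                             // only safe once the Bitmap is disposed
}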
