Optimization of image resize process - C#

I need to load several images (up to 1000) and display them in a list, with the longest side of each image capped at 250 px.
Since many of them are raw-format images (CR2 / ARW), I cannot open them simply with
var i = new Bitmap(x);
so I am opening them as a BitmapImage instead.
Secondly, size is a real issue, which is why I have to shrink each image to a kind of thumbnail.
Since I cannot read the height and width during the BitmapImage init process, I load the image first and then shrink it.
The following code does the trick, but it is awfully slow and consumes an enormous amount of memory.
If I ignore the "longest" side and set the decode width (or height) inside the init block, memory consumption drops to about 10 %.
Can somebody help me optimize it?
public BitmapSource getImage(string fileName, double width, double height)
{
    // Read and resize image
    BitmapImage tmpImage = new BitmapImage();
    tmpImage.BeginInit();
    tmpImage.CacheOption = BitmapCacheOption.OnLoad;
    tmpImage.UriSource = new Uri(fileName);
    tmpImage.EndInit();

    if (tmpImage.Width > tmpImage.Height)
    {
        tmpImage.DecodePixelWidth = (int)width;
    }
    else
    {
        tmpImage.DecodePixelHeight = (int)height;
    }
    return tmpImage;
}
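One way around the "load first, shrink later" problem is to read only the image dimensions up front and then let the decoder do the downscaling. The sketch below is one possible approach, not a tested drop-in: it uses a BitmapDecoder with DelayCreation / BitmapCacheOption.None to query the frame's pixel size (which should only parse the header, though it is worth verifying with the raw codec you use), then creates the thumbnail with DecodePixelWidth or DecodePixelHeight set before EndInit, so only the small image is ever decoded. The method name GetThumbnail is mine.
public BitmapSource GetThumbnail(string fileName, int maxSide)
{
    // First pass: open the file only to read the frame dimensions.
    // DelayCreation / None should avoid decoding the full-size pixels here.
    int pixelWidth, pixelHeight;
    using (var stream = System.IO.File.OpenRead(fileName))
    {
        var decoder = BitmapDecoder.Create(
            stream,
            BitmapCreateOptions.DelayCreation,
            BitmapCacheOption.None);
        pixelWidth = decoder.Frames[0].PixelWidth;
        pixelHeight = decoder.Frames[0].PixelHeight;
    }

    // Second pass: decode directly at thumbnail size.
    var thumb = new BitmapImage();
    thumb.BeginInit();
    thumb.CacheOption = BitmapCacheOption.OnLoad;
    thumb.UriSource = new Uri(fileName);
    if (pixelWidth > pixelHeight)
        thumb.DecodePixelWidth = maxSide;   // must be set BEFORE EndInit, or it has no effect
    else
        thumb.DecodePixelHeight = maxSide;
    thumb.EndInit();
    thumb.Freeze();
    return thumb;
}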

Related

Show byte array image in WPF with high refresh rate

Context
I have a Basler camera that raises an event when a new image is captured.
In the event args I can get the grabbed image as a byte array.
I have to run some processing on this image and then show it in a WPF application. The camera refresh rate is up to 40 FPS.
Issue and found solution
A solution for converting a byte array to a WPF image can be found here: Convert byte array to image in wpf
That solution is fine for a one-off conversion, but I suspect it wastes a lot of memory when done at 40 FPS: a new BitmapImage() is created every time and can never be disposed.
Is there a better way to display a byte array that changes up to 40 FPS in WPF? (The way the problem is handled can be completely rethought.)
Code
This solution to show the camera stream in WPF works, but the BitmapImage image = new BitmapImage(); line doesn't look good to me.
private void OnImageGrabbed(object sender, ImageGrabbedEventArgs e)
{
    // Get the result
    IGrabResult grabResult = e.GrabResult;
    if (!grabResult.GrabSucceeded)
    {
        throw new Exception($"Grab error: {grabResult.ErrorCode} {grabResult.ErrorDescription}");
    }

    // Process the image
    imageProcessor.Process(grabResult);

    // Convert grabResult to BGR 8-bit format
    using Bitmap bitmap = new Bitmap(grabResult.Width, grabResult.Height, PixelFormat.Format32bppRgb);
    BitmapData bmpData = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadWrite, bitmap.PixelFormat);
    IntPtr ptrBmp = bmpData.Scan0;
    converter.Convert(ptrBmp, bmpData.Stride * bitmap.Height, grabResult);
    bitmap.UnlockBits(bmpData);

    // Create the BitmapImage
    BitmapImage image = new BitmapImage(); // <-- never disposed!
    using (MemoryStream memory = new MemoryStream())
    {
        bitmap.Save(memory, ImageFormat.Bmp);
        memory.Position = 0;
        image.BeginInit();
        image.StreamSource = memory;
        image.CacheOption = BitmapCacheOption.OnLoad;
        image.EndInit();
        image.Freeze();
    }
    LastFrame = image; // View is bound to LastFrame
}
I would suggest that you use a WriteableBitmap to display the result. This avoids reallocating the UI image on every frame. If the pixel format in your source matches the one in the bitmap, you can simply use WritePixels to update the image.
Note that a WriteableBitmap can only be modified from the UI thread, and the ImageGrabbed event is raised on a background thread. Also, the grabResult is disposed of once the event handler returns. So you will need to ask the UI thread to do the actual updating, and you will need an intermediate buffer for this; the buffer can be pooled if needed, as in the sketch below.
An alternative might be to write your own loop that calls RetrieveResult repeatedly; this would let you dispose of the grab results manually, after the UI has been updated. It might also be possible to keep a pool of WriteableBitmaps. I would guess it is safe to write to one as long as it is not actually in use by the UI.
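A minimal sketch of that copy-and-dispatch pattern, assuming the Pylon converter has an overload that can fill a managed byte[] (check the PixelDataConverter overloads in your version); the field names writeableBitmap and frameBuffer are placeholders:
// Allocated once: a WriteableBitmap bound to the view plus a reusable staging buffer.
private WriteableBitmap writeableBitmap;
private byte[] frameBuffer;

private void OnImageGrabbed(object sender, ImageGrabbedEventArgs e)
{
    IGrabResult grabResult = e.GrabResult;
    if (!grabResult.GrabSucceeded) return;

    int width = grabResult.Width;
    int height = grabResult.Height;
    int stride = width * 4; // Bgr32: 4 bytes per pixel

    // Copy/convert into a managed buffer while grabResult is still valid.
    if (frameBuffer == null || frameBuffer.Length != stride * height)
        frameBuffer = new byte[stride * height];
    converter.Convert(frameBuffer, grabResult); // assumed byte[] overload

    // Hand the buffer to the UI thread; only WritePixels runs there.
    Application.Current.Dispatcher.Invoke(() =>
    {
        if (writeableBitmap == null)
        {
            writeableBitmap = new WriteableBitmap(width, height, 96, 96, PixelFormats.Bgr32, null);
            LastFrame = writeableBitmap; // bind the view to this once
        }
        writeableBitmap.WritePixels(new Int32Rect(0, 0, width, height), frameBuffer, stride, 0);
    });
}
Using the synchronous Invoke keeps the single shared buffer safe from being overwritten mid-write; with BeginInvoke you would want a small pool of buffers instead, as the answer suggests.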
On each frame, you are
creating a Bitmap
encoding it into a MemoryStream
creating a BitmapImage
decoding the MemoryStream into the BitmapImage
Better to create a WriteableBitmap once and repeatedly call its WritePixels method.
You may still need to convert the raw buffer, since WPF does not seem to have an exact equivalent of PixelFormat.Format32bppRgb; PixelFormats.Bgr32 is probably the closest match.
var wb = LastFrame as WriteableBitmap;
if (wb == null)
{
    wb = new WriteableBitmap(
        grabResult.Width, grabResult.Height,
        96, 96, PixelFormats.Bgr32, null);
    LastFrame = wb;
}
wb.WritePixels(...);
I am in a similar situation: pulling live images off a camera and dumping them to the UI for a "live" view. I spent a good deal of time trying to find the most efficient solution, and for me it turned out to be BitmapSource.Create. I take the raw array of bytes (plus a structure describing image characteristics such as width and height) and use one function call to convert it to a BitmapSource.
Now, in my case the images are 8-bit greyscale, so if you're trying to show color your arguments would be different, but here's a snippet of what I do.
public class XimeaCameraImage : ICameraImage
{
    public unsafe XimeaCameraImage(byte[] imageData, ImgParams imageParams)
    {
        Data = imageData;

        var fmt = PixelFormats.Gray8;
        var width = imageParams.GetWidth();
        var bitsPerPixel = 8; // TODO: Get ready for other image formats
        var height = imageParams.GetHeight();
        var stride = (((bitsPerPixel * width) + 31) / 32) * 4;
        var dpi = 96.0;

        // Copy the raw, unmanaged, image data from the Sdk.Image object.
        Source = BitmapSource.Create(
            width,
            height,
            dpi,
            dpi,
            fmt,
            BitmapPalettes.Gray256,
            imageData,
            stride);

        Source.Freeze();
    }

    public byte[] Data { get; }

    public BitmapSource Source { get; }
}
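For a color camera the same call works with a different format, stride and no palette. Here is a sketch assuming a 32-bit BGR frame buffer (as in the 40 FPS Basler case above); the method name and parameters are mine, not part of the original answer:
// Sketch: wrap a 32-bit BGR frame buffer in a frozen BitmapSource.
// BitmapSource.Create copies the buffer, so the result is independent of the
// camera's byte array and can safely be handed to the UI thread.
public static BitmapSource CreateColorFrame(byte[] pixelBuffer, int width, int height)
{
    const int bitsPerPixel = 32; // Bgr32: B, G, R, one padding byte
    int stride = (((bitsPerPixel * width) + 31) / 32) * 4;

    var source = BitmapSource.Create(
        width, height,
        96.0, 96.0,
        PixelFormats.Bgr32,
        null,          // no palette for full-color formats
        pixelBuffer,   // expected length: stride * height
        stride);
    source.Freeze();
    return source;
}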

Insufficient buffer size using WriteableBitmap?

I am modifying the ColorBasic Kinect example in order to display an image overlaid on the video stream. What I've done is load an image with a transparent background (currently a GIF, but that may change) and write it into the displayed bitmap.
The error I'm getting is that the buffer I'm writing to is too small.
I cannot see what the actual problem is (I'm a complete newbie in XAML/C#/Kinect): the WriteableBitmap is 1920x1080 and the bitmap I want to copy is 200x200, so why am I getting this error? I cannot see how a transparent background could do any harm, but I am beginning to suspect it...
Note that without the last WritePixels, the code works and I see the webcam's output. My code follows.
The overlay image:
public BitmapImage overlay = new BitmapImage(new Uri("C:\\users\\user\\desktop\\something.gif"));
The callback function that displays the Kinect's webcam (see the default example ColorBasic) with my very small modifications:
private void Reader_ColorFrameArrived(object sender, ColorFrameArrivedEventArgs e)
{
    // ColorFrame is IDisposable
    using (ColorFrame colorFrame = e.FrameReference.AcquireFrame())
    {
        if (colorFrame != null)
        {
            FrameDescription colorFrameDescription = colorFrame.FrameDescription;

            using (KinectBuffer colorBuffer = colorFrame.LockRawImageBuffer())
            {
                this.colorBitmap.Lock();

                // verify data and write the new color frame data to the display bitmap
                if ((colorFrameDescription.Width == this.colorBitmap.PixelWidth) && (colorFrameDescription.Height == this.colorBitmap.PixelHeight))
                {
                    colorFrame.CopyConvertedFrameDataToIntPtr(
                        this.colorBitmap.BackBuffer,
                        (uint)(colorFrameDescription.Width * colorFrameDescription.Height * 4),
                        ColorImageFormat.Bgra);

                    this.colorBitmap.AddDirtyRect(new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight));
                }

                if (this.overlay != null)
                {
                    // Calculate stride of source
                    int stride = overlay.PixelWidth * (overlay.Format.BitsPerPixel / 8);

                    // Create data array to hold source pixel data
                    byte[] data = new byte[stride * overlay.PixelHeight];

                    // Copy source image pixels to the data array
                    overlay.CopyPixels(data, stride, 0);

                    this.colorBitmap.WritePixels(new Int32Rect(0, 0, overlay.PixelWidth, overlay.PixelHeight), data, stride, 0);
                }

                this.colorBitmap.Unlock();
            }
        }
    }
}
Your overlay.Format.BitsPerPixel / 8 will be 1 (because it's a GIF), but you're trying to copy it into something that is not a GIF, probably BGRA (32-bit). Hence the huge difference in size (4x).
Also be careful with the stride you pass to WritePixels: it describes the pixel data you are writing, and it has to be consistent with the destination's 32-bit format, whereas you passed the 8-bit overlay's stride (this can cause weird problems as well).
And finally, even if it all went 100% smoothly, your overlay would not actually "overlay" anything; it would replace the pixels underneath, since there is no alpha blending math in your code.
Switch your .gif to a .png (32-bit) and see if that helps.
Also, if you're looking for AlphaBltMerge-type code, I wrote the entire thing here; it's very easy to understand:
Merge 2 - 32bit Images with Alpha Channels
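A sketch of the format fix described above: converting the overlay to Bgra32 up front (this works whether the file is a GIF or a PNG), so the stride and buffer size are computed against 4 bytes per pixel. This is an illustrative variant of the asker's overlay block, not the accepted fix, and it still replaces rather than alpha-blends the pixels.
// Convert the overlay once (e.g. in the constructor) so its pixel format is
// known to be 32-bit BGRA, matching what the Kinect color bitmap uses.
var overlayBgra = new FormatConvertedBitmap(overlay, PixelFormats.Bgra32, null, 0);

int overlayStride = overlayBgra.PixelWidth * 4; // 4 bytes per Bgra32 pixel
byte[] overlayPixels = new byte[overlayStride * overlayBgra.PixelHeight];
overlayBgra.CopyPixels(overlayPixels, overlayStride, 0);

// Write the 200x200 block into the top-left corner of the 1920x1080 bitmap.
this.colorBitmap.WritePixels(
    new Int32Rect(0, 0, overlayBgra.PixelWidth, overlayBgra.PixelHeight),
    overlayPixels,
    overlayStride,
    0);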

Is there a way to resize an image using GPU?

Is there a way to resize an image using the GPU (graphics card) that is consumable from a .NET application?
I am looking for an extremely performant way to resize images and have heard that the GPU can do it much quicker than the CPU (GDI+ using C#). Are there known implementations or sample code using the GPU to resize images that I could consume in .NET?
Have you thought about using XNA to resize your images? Here you can find out how to use XNA to save an image as a PNG/JPEG to a MemoryStream and later reuse it as a Bitmap object:
EDIT: I will post an example here (taken from the link above) on how you can possibly use XNA.
public static Image Texture2Image(Texture2D texture)
{
    Image img;
    using (MemoryStream MS = new MemoryStream())
    {
        texture.SaveAsPng(MS, texture.Width, texture.Height);
        // Go to the beginning of the stream.
        MS.Seek(0, SeekOrigin.Begin);
        // Create the image based on the stream.
        img = Bitmap.FromStream(MS);
    }
    return img;
}
I also found out today that you can use OpenCV to take advantage of the GPU/multicore CPUs. You can, for example, use a .NET wrapper such as Emgu, manipulate your picture with its Image class, and return a .NET Bitmap:
public static Bitmap ResizeBitmap(Bitmap sourceBM, int width, int height)
{
    // Initialize Emgu Image object
    Image<Bgr, Byte> img = new Image<Bgr, Byte>(sourceBM);

    // Resize using linear interpolation (Resize returns a new image)
    Image<Bgr, Byte> resized = img.Resize(width, height, INTER.CV_INTER_LINEAR);

    // Return .NET Bitmap object
    return resized.ToBitmap();
}
I wrote a quick spike to check performance using WPF, though I cannot say for sure that it's using the GPU.
Still, see below. This scales an image to 33.5 (or whatever) times its original size.
public void Resize()
{
    double scaleFactor = 33.5;

    var originalFileStream = System.IO.File.OpenRead(@"D:\SkyDrive\Pictures\Random\Misc\DoIt.jpg");
    var originalBitmapDecoder = JpegBitmapDecoder.Create(originalFileStream, BitmapCreateOptions.None, BitmapCacheOption.OnLoad);
    BitmapFrame originalBitmapFrame = originalBitmapDecoder.Frames.First();
    var originalPixelFormat = originalBitmapFrame.Format;

    TransformedBitmap transformedBitmap =
        new TransformedBitmap(originalBitmapFrame, new System.Windows.Media.ScaleTransform()
        {
            ScaleX = scaleFactor,
            ScaleY = scaleFactor
        });

    int stride = ((transformedBitmap.PixelWidth * transformedBitmap.Format.BitsPerPixel) + 7) / 8;
    int pixelCount = (stride * (transformedBitmap.PixelHeight - 1)) + stride;

    byte[] buffer = new byte[pixelCount];
    transformedBitmap.CopyPixels(buffer, stride, 0);

    WriteableBitmap transformedWriteableBitmap = new WriteableBitmap(transformedBitmap.PixelWidth, transformedBitmap.PixelHeight, transformedBitmap.DpiX, transformedBitmap.DpiY, transformedBitmap.Format, transformedBitmap.Palette);
    transformedWriteableBitmap.WritePixels(new Int32Rect(0, 0, transformedBitmap.PixelWidth, transformedBitmap.PixelHeight), buffer, stride, 0);

    BitmapFrame transformedFrame = BitmapFrame.Create(transformedWriteableBitmap);

    var jpegEncoder = new JpegBitmapEncoder();
    jpegEncoder.Frames.Add(transformedFrame);

    using (var outputFileStream = System.IO.File.OpenWrite(@"C:\DATA\Scrap\WPF.jpg"))
    {
        jpegEncoder.Save(outputFileStream);
    }
}
The image I was testing was 495 x 360. It resized it to over 16k x 12k in a couple of seconds, including save out.
It resizes to 1.5x around 165 times a second in a single-core run. This is on an i7 with the GPU seemingly doing nothing and the CPU at 20%; I'd expect roughly 5x more when multithreaded.
Performance profiling shows a hot path into wpfgfx_v0400.dll, which is the native WPF graphics library and sits close to DirectX (look up 'milcore' on Google).
So it might be accelerated, I don't know.
Luke
Yes, it is possible to use the GPU to resize your images. This can be done with DirectX surfaces (for example using SlimDX in C#). You create a surface and move your image onto it, then stretch this surface to another target surface of the desired size using only the GPU, and finally read the resized image back from the target surface. The pixel formats of the two surfaces can differ and the GPU handles the conversion automatically. The main thing that can hurt performance here is that moving data between GPU and CPU memory is time-consuming, so depending on your situation you should apply techniques that avoid extra transfers between CPU and GPU memory.

Memory leak while asynchronously loading BitmapSource images

I have a fair few images that I'm loading into a ListBox in my WPF application. Originally I was using GDI to resize the images (the originals take up far too much memory). That was fine, except they were taking about 400ms per image. Not so fine. So in search of another solution I found a method that uses TransformedBitmap (which inherits from BitmapSource). That's great, I thought, I can use that. Except I'm now getting memory leaks somewhere...
I'm loading the images asynchronously using a BackgroundWorker like so:
BitmapSource bs = ImageUtils.ResizeBitmapSource(ImageUtils.GetImageSource(photo.FullName));
//BitmapSource bs = ImageUtils.GetImageSource(photo.FullName);
bs.Freeze();
this.dispatcher.Invoke(new Action(() => { photo.Source = bs; }));
GetImageSource just gets the Bitmap from the path and then converts to BitmapSource.
Here's the code snippet for ResizeBitmapSource:
const int thumbnailSize = 200;
int width;
int height;

if (bs.Width > bs.Height)
{
    width = thumbnailSize;
    height = (int)(bs.Height * thumbnailSize / bs.Width);
}
else
{
    height = thumbnailSize;
    width = (int)(bs.Width * thumbnailSize / bs.Height);
}

BitmapSource tbBitmap = new TransformedBitmap(bs,
    new ScaleTransform(width / bs.Width,
                       height / bs.Height, 0, 0));

return tbBitmap;
That code is essentially the code from:
http://rongchaua.net/blog/c-wpf-fast-image-resize/
Any ideas what could be causing the leak?
edit:
Here's the code for GetImageSource, as requested
using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read))
{
    using (var bmp = Image.FromStream(stream, false, false))
    {
        // Use WPF to resize
        var bitmapSource = ConvertBitmapToBitmapSource(bmp);
        bitmapSource = ResizeBitmapSource(bitmapSource);
        return bitmapSource;
    }
}
I think you misunderstood how the TransformedBitmap works. It holds onto a reference to the source bitmap and transforms it in memory. Maybe you could encode the transformed bitmap into a memory stream and read it right back out. I'm not sure how fast this would be, but you wouldn't then be holding on to the full-sized bitmap.
I found this blog post that returned a WriteableBitmap with the TransformedBitmap as the source. The WriteableBitmap copies the pixel data to a memory buffer in its initializer, so it doesn't actually hold on to a reference to the TransformedBitmap or the full-sized image.
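A sketch of that approach, reusing the bs/width/height variables from the ResizeBitmapSource snippet above (this is an illustrative variant, not the blog's exact code):
// Variant of ResizeBitmapSource: return a WriteableBitmap built from the
// TransformedBitmap so only the thumbnail-sized pixels are retained.
BitmapSource tbBitmap = new TransformedBitmap(bs,
    new ScaleTransform(width / bs.Width, height / bs.Height, 0, 0));

// The WriteableBitmap constructor copies the pixels into its own buffer,
// so it no longer references tbBitmap or the original full-sized source.
var thumbnail = new WriteableBitmap(tbBitmap);
thumbnail.Freeze(); // safe to hand to the UI thread
return thumbnail;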
At a guess, from looking at your code you might need to dispose of the bitmap returned by the call to ImageUtils.GetImageSource(photo.FullName).
I also noticed that on the blog you pointed to, the author has added an update (11th of March) about inserting a using statement to prevent memory leaks.

Image resizing - sometimes very poor quality?

I'm resizing some images to the screen resolution of the user; if the aspect ratio is wrong, the image should be cut.
My code looks like this:
protected void ConvertToBitmap(string filename)
{
    var origImg = System.Drawing.Image.FromFile(filename);
    var widthDivisor = (double)origImg.Width / (double)System.Windows.Forms.Screen.PrimaryScreen.Bounds.Width;
    var heightDivisor = (double)origImg.Height / (double)System.Windows.Forms.Screen.PrimaryScreen.Bounds.Height;

    int newWidth, newHeight;
    if (widthDivisor < heightDivisor)
    {
        newWidth = (int)((double)origImg.Width / widthDivisor);
        newHeight = (int)((double)origImg.Height / widthDivisor);
    }
    else
    {
        newWidth = (int)((double)origImg.Width / heightDivisor);
        newHeight = (int)((double)origImg.Height / heightDivisor);
    }

    var newImg = origImg.GetThumbnailImage(newWidth, newHeight, null, IntPtr.Zero);
    newImg.Save(this.GetBitmapPath(filename), System.Drawing.Imaging.ImageFormat.Bmp);
}
In most cases this works fine, but for some images the result has extremely poor quality. It looks as if they had been resized down to something very small (thumbnail size) and then enlarged again, yet the resolution of the image is correct. What can I do?
Example original image: http://img523.imageshack.us/img523/1430/naturaerowoods.jpg
Example resized image: (image not available)
Note: I have a WPF application but I use the WinForms function for resizing because it's easier and because I already need a reference to System.Windows.Forms for a tray icon.
Change the last two lines of your method to this:
var newImg = new Bitmap(newWidth, newHeight);
using (Graphics g = Graphics.FromImage(newImg))
{
    g.DrawImage(origImg, new Rectangle(0, 0, newWidth, newHeight));
}
newImg.Save(this.GetBitmapPath(filename), System.Drawing.Imaging.ImageFormat.Bmp);
I cannot peek into the .NET source at the moment, but most likely the problem is in the Image.GetThumbnailImage method. Even MSDN says that "it works well when the requested thumbnail image has a size of about 120 x 120 pixels, but if you request a large thumbnail image (for example, 300 x 300) from an Image that has an embedded thumbnail, there could be a noticeable loss of quality in the thumbnail image". For true resizing (i.e. not thumbnailing), you should use the Graphics.DrawImage method. You may also need to play with Graphics.InterpolationMode to get better quality if needed.
If you're not creating a thumbnail, using a method called GetThumbnailImage probably isn't a good idea...
For other options, have a look at this CodeProject article. In particular, it creates a new image, creates a Graphics for it, sets the interpolation mode to HighQualityBicubic, and draws the original image onto the graphics. Worth a try, at least; a minimal version is sketched below.
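A minimal sketch of that approach in plain GDI+ (not the CodeProject article's exact code):
// Resize with explicit high-quality interpolation instead of GetThumbnailImage.
public static System.Drawing.Bitmap ResizeHighQuality(System.Drawing.Image original, int newWidth, int newHeight)
{
    var resized = new System.Drawing.Bitmap(newWidth, newHeight);
    using (var g = System.Drawing.Graphics.FromImage(resized))
    {
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
        g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.HighQuality;
        g.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
        g.DrawImage(original, new System.Drawing.Rectangle(0, 0, newWidth, newHeight));
    }
    return resized;
}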
As indicated on MSDN, GetThumbnailImage() is not designed to do arbitrary image scaling. Anything over 120x120 should be scaled manually. Try this instead:
using(var newImg = new Bitmap(origImg, newWidth, newHeight))
{
newImg.Save(this.GetBitmapPath(filename), System.Drawing.Imaging.ImageFormat.Bmp);
}
Edit
As a point of clarification, this overload of the Bitmap constructor calls Graphics.DrawImage, though you do not have any control over the interpolation.
Instead of this code:
newImg.Save(this.GetBitmapPath(filename), System.Drawing.Imaging.ImageFormat.Bmp);
use this:
System.Drawing.Imaging.ImageCodecInfo[] info = System.Drawing.Imaging.ImageCodecInfo.GetImageEncoders();
System.Drawing.Imaging.EncoderParameters param = new System.Drawing.Imaging.EncoderParameters(1);
param.Param[0] = new System.Drawing.Imaging.EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 100L);
newImg.Save(this.GetBitmapPath(filename), info[1], param); // info[1] is typically the JPEG encoder
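Relying on the encoder's position in GetImageEncoders() is fragile, though; a safer variant (my suggestion, not part of the original answer) looks the JPEG codec up by MIME type:
// Find the JPEG encoder explicitly instead of relying on its index in the array.
System.Drawing.Imaging.ImageCodecInfo jpegCodec = System.Array.Find(
    System.Drawing.Imaging.ImageCodecInfo.GetImageEncoders(),
    c => c.MimeType == "image/jpeg");

System.Drawing.Imaging.EncoderParameters param = new System.Drawing.Imaging.EncoderParameters(1);
param.Param[0] = new System.Drawing.Imaging.EncoderParameter(System.Drawing.Imaging.Encoder.Quality, 100L);

newImg.Save(this.GetBitmapPath(filename), jpegCodec, param);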
For example, the original image is a JPG and the resized image is a PNG. Are you converting between formats on purpose? Switching between different lossy compression schemes can cause quality loss.
Are you increasing or decreasing the size of the image when you resize it? If you are creating a larger image from a smaller one, this sort of degradation is to be expected.
Images will definitely be degraded if you enlarge them.
Some cameras put a resized thumbnail into the file itself, presumably for preview purposes on the device.
GetThumbnailImage actually returns this thumbnail, which is embedded within the image file, instead of scaling down the full-resolution image.
The easy solution is to trick .NET into throwing away that thumbnail information before doing your resize or other operation, like so:
img.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipX);
//removes thumbnails from digital camera shots
img.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipX);
If you are attempting to resize while constraining proportions, I wrote an extension method on System.Drawing.Image that you might find handy.
/// <summary>
/// Resizes the image while preserving its aspect ratio.
/// </summary>
/// <param name="img"></param>
/// <param name="size">Size of the constraining proportion</param>
/// <param name="constrainOnWidth"></param>
/// <param name="dontResizeIfSmaller"></param>
/// <returns></returns>
public static System.Drawing.Image ResizeConstrainProportions(this System.Drawing.Image img,
    int size, bool constrainOnWidth, bool dontResizeIfSmaller)
{
    if (dontResizeIfSmaller && (img.Width < size))
        return img;

    // Flip twice to discard any embedded camera thumbnail (see above).
    img.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipX);
    img.RotateFlip(System.Drawing.RotateFlipType.Rotate180FlipX);

    float ratio = (float)img.Width / (float)img.Height;

    int height, width = 0;
    if (constrainOnWidth)
    {
        height = (int)(size / ratio);
        width = size;
    }
    else
    {
        width = (int)(size * ratio);
        height = size;
    }

    return img.GetThumbnailImage(width, height, null, (new System.IntPtr(0)));
}
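A hypothetical call site, constraining a photo to 250 px on the width and leaving smaller images untouched:
// Usage sketch: 250 px wide thumbnail, skip resizing if the source is already smaller.
System.Drawing.Image thumb = originalImage.ResizeConstrainProportions(250, true, true);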
This is going to vary widely based on the following factors:
How closely the destination resolution matches a "natural" scale of the original resolution
The source image color depth
The image type(s) - some are more lossy than others
