Fast way to create a big bitmap from an array of bitmaps? - C#

I have this code. Copy/paste it into a new WinForms app and, when you run it, it will write a file named test123abcd.png to your desktop:
Private Sub Form1_Load(sender As System.Object, e As System.EventArgs) Handles MyBase.Load
    Dim SquareSize = 5
    Dim GridX = 2500
    Dim GridY = 2500
    Dim SquareCount = GridX * GridY - 1
    Dim sw As New Stopwatch
    Dim Rect(4) As Rectangle
    Rect(0) = New Rectangle(0, 3, 3, 1)
    Rect(1) = New Rectangle(3, 0, 1, 3)
    Rect(2) = New Rectangle(3, 3, 3, 1)
    Rect(3) = New Rectangle(0, 0, 1, 3)
    Dim fullsw = Stopwatch.StartNew
    Using board = New Bitmap(SquareSize * (GridX + 1), SquareSize * (GridY + 1), Imaging.PixelFormat.Format32bppPArgb)
        Using graph = Graphics.FromImage(board)
            Using _board = New Bitmap(SquareSize, SquareSize, Imaging.PixelFormat.Format32bppPArgb)
                Using g As Graphics = Graphics.FromImage(_board)
                    For i = 0 To SquareCount
                        g.Clear(If((i And 1) = 1, Color.Red, Color.Blue))
                        g.FillRectangles(Brushes.White, Rect)
                        sw.Start()
                        graph.DrawImageUnscaled(_board, ((i Mod GridX) * SquareSize), ((i \ GridY) * SquareSize))
                        sw.Stop()
                    Next
                End Using
            End Using
        End Using
        fullsw.Stop()
        board.Save(Environment.GetFolderPath(Environment.SpecialFolder.DesktopDirectory) & "\test123abcd.png", Imaging.ImageFormat.Png)
    End Using
    MessageBox.Show("Full SW: " & fullsw.ElapsedMilliseconds & Environment.NewLine &
                    "DrawImageUnscaled SW: " & sw.ElapsedMilliseconds)
End Sub
About 40% to 45% of the time is spent in DrawImageUnscaled: about 23 seconds on my current computer, while the whole thing takes about 50 seconds.
Is there a way to speed up DrawImageUnscaled (and maybe the whole thing)?
EDIT: the question is in VB.NET, the answer in C#.

Assuming that the generation part (g.FillRectangles(Brushes.White, Rect), which is pretty time-consuming too) cannot be avoided, the best thing you can do is avoid a second generation pass for board and just copy the information from _board. Copying is much quicker than generating anew (as shown below), but you have the problem that the source information (_board) does not match the destination layout (board, when relying on .SetPixel), so you will have to write a function that determines the current pixel (X/Y point) from the provided information (the current rectangle).
Below you can see a simple code showing the time requirement differences between both approaches:
Dim SquareSize As Integer = 5
Dim _board As Bitmap = Bitmap.FromFile("in.png")
Dim board As Bitmap = New Bitmap(_board.Width * SquareSize, _board.Height * SquareSize)
For x As Integer = 0 To _board.Width - 1
    For y As Integer = 0 To _board.Height - 1
        board.SetPixel(x * SquareSize, y * SquareSize, _board.GetPixel(x, y))
    Next
Next
board.Save("out1.png", Imaging.ImageFormat.Png)
board = New Bitmap(_board.Width, _board.Height)
Using board
    Using graph = Graphics.FromImage(board)
        Using _board
            Using g As Graphics = Graphics.FromImage(_board)
                For x As Integer = 0 To _board.Width - 1
                    For y As Integer = 0 To _board.Height - 1
                        graph.DrawImageUnscaled(_board, x, y)
                    Next
                Next
            End Using
        End Using
    End Using
    board.Save("out2.png", Imaging.ImageFormat.Png)
End Using
Bear in mind that this is not properly working code. Its whole point is to show how to copy pixels between bitmaps (multiplying by a factor, just to get an output different from the input), and to put the DrawImageUnscaled method under equivalent conditions (although the output picture is, logically, different) to get a good feel for the difference in time requirements between the two approaches.
As said via comment, this is all I can do under the current conditions. I hope it is enough to help you find the best solution.

Wow, I like unsafe code when it's worth it; I solved my problem with C# in the end.
Here is the code, which is about 70x faster than the code in my question:
using System;
using System.Drawing;
using System.Drawing.Imaging;
namespace BmpFile
{
public class BmpTest
{
private const int PixelSize = 4;
public static long Test(int GridX, int GridY, int SquareSize, Rectangle[][] Rect)
{
Bitmap bmp = new Bitmap(GridX * SquareSize, GridY * SquareSize, PixelFormat.Format32bppArgb);
BitmapData bmd = bmp.LockBits(new Rectangle(0, 0, bmp.Width, bmp.Height),
System.Drawing.Imaging.ImageLockMode.ReadWrite,
bmp.PixelFormat);
int Stride = bmd.Stride;
int Height = bmd.Height;
int Width = bmd.Width;
int RectFirst = Rect.GetUpperBound(0);
int RectSecond;
int Offset1, Offset2, Offset3;
int i, j, k, l, w, h;
int FullRow = SquareSize * Stride;
int FullSquare = SquareSize * PixelSize;
var sw = System.Diagnostics.Stopwatch.StartNew();
unsafe
{
byte* row = (byte*)bmd.Scan0;
//draw all rectangles
for (i = 0; i <= RectFirst; ++i)
{
Offset1 = ((i / GridX) * FullRow) + ((i % GridX) * FullSquare) + 3;
RectSecond = Rect[i].GetUpperBound(0);
for (j = 0; j <= RectSecond; ++j)
{
Offset2 = Rect[i][j].X * PixelSize + Rect[i][j].Y * Stride;
w=Rect[i][j].Width;
h=Rect[i][j].Height;
for (k = 0; k <= w; ++k)
{
Offset3 = k * PixelSize;
for (l = 0; l <= h; ++l)
{
row[Offset1 + Offset2 + Offset3 + (l * Stride)] = 255;
}
}
}
}
//invert color
for (int y = 0; y < Height; y++)
{
Offset1 = (y * Stride) + 3;
for (int x = 0; x < Width; x++)
{
if (row[Offset1 + x * PixelSize] == 255)
{
row[Offset1 + x * PixelSize] = 0;
}
else
{
row[Offset1 + x * PixelSize] = 255;
}
}
}
}
sw.Stop();
bmp.UnlockBits(bmd);
bmp.Save(Environment.GetFolderPath(Environment.SpecialFolder.Desktop) + @"\test.png", ImageFormat.Png);
bmp.Dispose();
return sw.ElapsedMilliseconds;
}
}
}
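For reference, here is a minimal, hypothetical usage sketch (not part of the original answer). It assumes the rectangles are given in square-local coordinates, one Rectangle[] per grid square, and that the project is compiled with /unsafe. Note the inner fill loops above are inclusive, so these rectangles are chosen to stay inside a 5x5 square; they are illustrative rather than the exact pattern from the question.
using System;
using System.Drawing;
class Program
{
    static void Main()
    {
        const int gridX = 2500, gridY = 2500, squareSize = 5;
        // A small cross inside each 5x5 square (square-local coordinates).
        Rectangle[] cross =
        {
            new Rectangle(1, 2, 2, 0),   // horizontal bar, 3x1 pixels (loops above are inclusive)
            new Rectangle(2, 1, 0, 2)    // vertical bar, 1x3 pixels
        };
        // One rectangle set per square; the same pattern is reused everywhere here.
        var rects = new Rectangle[gridX * gridY][];
        for (int i = 0; i < rects.Length; i++)
            rects[i] = cross;
        long ms = BmpFile.BmpTest.Test(gridX, gridY, squareSize, rects);
        Console.WriteLine("Unsafe fill + invert took " + ms + " ms");
    }
}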

Related

distance/similarity between 2 bitmap images in C#

I came across the following code taken from here:
using System;
using System.Drawing;
class Program
{
static void Main()
{
Bitmap img1 = new Bitmap("Lenna50.jpg");
Bitmap img2 = new Bitmap("Lenna100.jpg");
if (img1.Size != img2.Size)
{
Console.Error.WriteLine("Images are of different sizes");
return;
}
float diff = 0;
for (int y = 0; y < img1.Height; y++)
{
for (int x = 0; x < img1.Width; x++)
{
diff += (float)Math.Abs(img1.GetPixel(x, y).R - img2.GetPixel(x, y).R) / 255;
diff += (float)Math.Abs(img1.GetPixel(x, y).G - img2.GetPixel(x, y).G) / 255;
diff += (float)Math.Abs(img1.GetPixel(x, y).B - img2.GetPixel(x, y).B) / 255;
}
}
Console.WriteLine("diff: {0} %", 100 * diff / (img1.Width * img1.Height * 3));
}
}
Unfortunately, this is really slow. Is anyone aware of a faster way of calculating the distance between 2 images? Thanks!
To provide some more context as well. I am working on something like this:
https://rogerjohansson.blog/2008/12/07/genetic-programming-evolution-of-mona-lisa/
I evolve SVGs which are then translated into a Bitmap and compared to the target image.
Just came across the AForge.NET library.
PS:
I started to rewrite the above using LockBits. The code below is my current attempt, but I am a bit stuck. Please note that bmp1 is the 'target picture' and does not really change, hence the copying can be factored out / only needs to be done once. The Bitmap bmp2 is passed in and compared with bmp1 (there are hundreds of bmp2s). Ultimately, I would like to determine the similarity between bmp1 and bmp2 using some distance (e.g. Euclidean distance of the bytes?). Any pointers regarding this and how to speed the code up would be very much appreciated. Thanks!
public double Compare(Bitmap bmp1, Bitmap bmp2)
{
BitmapData bitmapData1 = bmp1.LockBits(new Rectangle(0, 0, bmp1.Width, bmp1.Height), ImageLockMode.ReadWrite, bmp1.PixelFormat);
BitmapData bitmapData2 = bmp2.LockBits(new Rectangle(0, 0, bmp2.Width, bmp2.Height), ImageLockMode.ReadWrite, bmp2.PixelFormat);
IntPtr ptr1 = bitmapData1.Scan0;
int bytes1 = bitmapData1.Stride * bmp1.Height;
byte[] rgbValues1 = new byte[bytes1];
byte[] r1 = new byte[bytes1 / 3];
byte[] g1 = new byte[bytes1 / 3];
byte[] b1 = new byte[bytes1 / 3];
Marshal.Copy(ptr1, rgbValues1, 0, bytes1);
bmp1.UnlockBits(bitmapData1);
IntPtr ptr2 = bitmapData2.Scan0;
int bytes2 = bitmapData2.Stride * bmp2.Height;
byte[] rgbValues2 = new byte[bytes2];
byte[] r2 = new byte[bytes2 / 3];
byte[] g2 = new byte[bytes2 / 3];
byte[] b2 = new byte[bytes2 / 3];
Marshal.Copy(ptr2, rgbValues2, 0, bytes2);
bmp2.UnlockBits(bitmapData2);
int stride = bitmapData1.Stride;
for (int column = 0; column < bitmapData1.Height; column++)
{
for (int row = 0; row < bitmapData1.Width; row++)
{
//????
}
}
return 0;
}
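For reference, a rough sketch (my own, untested) of what the missing inner part could compute, assuming 24bpp data, i.e. 3 bytes per pixel, and using the stride to step between rows:
// Hypothetical sketch: sum of squared per-channel differences (Euclidean distance)
// over two byte buffers copied out with Marshal.Copy; assumes 3 bytes per pixel.
static double Distance(byte[] a, byte[] b, int width, int height, int stride)
{
    double sum = 0;
    for (int y = 0; y < height; y++)
    {
        int row = y * stride;                 // use the stride, not width * 3, to skip row padding
        for (int x = 0; x < width; x++)
        {
            int i = row + x * 3;
            int db = a[i] - b[i];             // blue
            int dg = a[i + 1] - b[i + 1];     // green
            int dr = a[i + 2] - b[i + 2];     // red
            sum += db * db + dg * dg + dr * dr;
        }
    }
    return Math.Sqrt(sum);
}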
PPS:
I (think I) made some more progress. The following code seems to work:
using System.Drawing;
using System.Drawing.Imaging;
using Color = System.Drawing.Color;
namespace ClassLibrary1
{
public unsafe class BitmapComparer : IBitmapComparer
{
private readonly Color[] _targetBitmapColors;
private readonly int _width;
private readonly int _height;
private readonly int _maxPointerLength;
private readonly PixelFormat _pixelFormat;
public BitmapComparer(Bitmap targetBitmap)
{
_width = targetBitmap.Width;
_height = targetBitmap.Height;
_maxPointerLength = _width * _height;
_pixelFormat = targetBitmap.PixelFormat;
_targetBitmapColors = new Color[_maxPointerLength];
var bData = targetBitmap.LockBits(new Rectangle(0, 0, _width, _height), ImageLockMode.ReadWrite, _pixelFormat);
var scan0 = (byte*) bData.Scan0.ToPointer();
for (var i = 0; i < _maxPointerLength; i += 4)
{
_targetBitmapColors[i] = Color.FromArgb(scan0[i + 2], scan0[i + 1], scan0[i + 0]);
}
targetBitmap.UnlockBits(bData);
}
// https://rogerjohansson.blog/2008/12/09/genetic-programming-mona-lisa-faq/
public double Compare(Bitmap bitmapToCompareWith)
{
var bData = bitmapToCompareWith.LockBits(new Rectangle(0, 0, _width, _height), ImageLockMode.ReadWrite, _pixelFormat);
var scan0 = (byte*) bData.Scan0.ToPointer();
double distance = 0;
for (var i = 0; i < _maxPointerLength; i += 4)
{
distance +=
( ((_targetBitmapColors[i].R - scan0[i + 2]) * (_targetBitmapColors[i].R - scan0[i + 2]))
+ ((_targetBitmapColors[i].G - scan0[i + 1]) * (_targetBitmapColors[i].G - scan0[i + 1]))
+ ((_targetBitmapColors[i].B - scan0[i + 0]) * (_targetBitmapColors[i].B - scan0[i + 0])));
}
bitmapToCompareWith.UnlockBits(bData);
return distance;
}
}
}
Using all pixels will always be time-consuming. What if you use a randomly selected sample of the images' pixels? Also, you can apply hierarchical image granularity. In this way, you will get more information about the details present in the images.
I am also working on a similar project. It is available on GitHub under the name Ellipses-Image-Approximator.
Something like this:
package eu.veldsoft.ellipses.image.approximator;
import java.awt.image.BufferedImage;
import java.util.HashSet;
import java.util.Random;
import java.util.Set;
/**
* Compare to raster images by using Euclidean distance between the pixels but
* in probabilistic sampling on hierarchical image detailization.
*
* @author Todor Balabanov
*/
class HierarchicalProbabilisticImageComparator implements ImageComparator {
/** A pseudo-random number generator instance. */
private static final Random PRNG = new Random();
/**
* Euclidean distance color comparator instance.
*/
private static final ColorComparator EUCLIDEAN = new EuclideanColorComparator();
/** Recursive descent depth level. */
private int depthLevel = 1;
/**
* Size of the sample in percentages from the size of the population (from
* 0.0 to 1.0).
*/
private double samplePercent = 0.1;
/** A supportive array for the first image pixels. */
private int aPixels[] = null;
/** A supportive array for the second image pixels. */
private int bPixels[] = null;
/**
* Constructor without parameters for default members' values.
*/
public HierarchicalProbabilisticImageComparator() {
this(1, 0.1);
}
/**
* Constructor with all parameters.
*
* @param depthLevel
* Recursive descent depth level.
* @param samplePercent
* Size of the sample in percentages from the size of the
* population (from 0.0 to 1.0).
*/
public HierarchicalProbabilisticImageComparator(int depthLevel,
double samplePercent) {
super();
this.depthLevel = depthLevel;
this.samplePercent = samplePercent;
}
private double distance(int width, int level, int minX, int minY, int maxX,
int maxY) {
/*
* At the bottom of the recursive descent, distance is zero, and
* descending is canceled.
*/
if (level > depthLevel) {
return 0;
}
/* Rectangle's boundaries should be observed. */
if (maxX <= minX || maxY <= minY) {
return 0;
}
/*
* Sample size calculated according formula.
*
* https://www.surveymonkey.com/mp/sample-size-calculator/
*/
int sampleSize = (int) ((maxX - minX) * (maxY - minY) * samplePercent);
/* Generate unique indices of pixels with the size of the sample. */
Set<Integer> indices = new HashSet<Integer>();
while (indices.size() < sampleSize) {
int x = minX + PRNG.nextInt(maxX - minX + 1);
int y = minY + PRNG.nextInt(maxY - minY + 1);
indices.add(y * width + x);
}
/* The Euclidean distance of the randomly selected pixels. */
double sum = 0;
for (int index : indices) {
sum += EUCLIDEAN.distance(aPixels[index], bPixels[index]);
}
/* Do a recursive descent. */
return (sum / sampleSize) * level
+ distance(width, level + 1, minX, minY,
maxX - (maxX - minX) / 2, maxY - (maxY - minY) / 2)
+ distance(width, level + 1, maxX - (maxX - minX) / 2, minY,
maxX, maxY - (maxY - minY) / 2)
+ distance(width, level + 1, minX, maxY - (maxY - minY) / 2,
maxX - (maxX - minX) / 2, maxY)
+ distance(width, level + 1, maxX - (maxX - minX) / 2,
maxY - (maxY - minY) / 2, maxX, maxY);
}
/**
* {@inheritDoc}
*/
@Override
public double distance(BufferedImage a, BufferedImage b) {
if (a.getWidth() != b.getWidth()) {
throw new RuntimeException("Images width should be identical!");
}
if (a.getHeight() != b.getHeight()) {
throw new RuntimeException("Images height should be identical!");
}
aPixels = a.getRGB(0, 0, a.getWidth(), a.getHeight(), null, 0,
a.getWidth());
bPixels = b.getRGB(0, 0, b.getWidth(), b.getHeight(), null, 0,
b.getWidth());
/* Do a recursive calculation. */
return distance(Math.min(a.getWidth(), b.getWidth()), 1, 0, 0,
Math.min(a.getWidth() - 1, b.getWidth() - 1),
Math.min(a.getHeight() - 1, b.getHeight() - 1));
}
}
As others have pointed out, you can use Bitmap.LockBits and use pointers instead of GetPixel. The following runs about 200 times faster than the original approach:
static float CalculateDifference(Bitmap bitmap1, Bitmap bitmap2)
{
if (bitmap1.Size != bitmap2.Size)
{
return -1;
}
var rectangle = new Rectangle(0, 0, bitmap1.Width, bitmap1.Height);
BitmapData bitmapData1 = bitmap1.LockBits(rectangle, ImageLockMode.ReadOnly, bitmap1.PixelFormat);
BitmapData bitmapData2 = bitmap2.LockBits(rectangle, ImageLockMode.ReadOnly, bitmap2.PixelFormat);
float diff = 0;
var byteCount = rectangle.Width * rectangle.Height * 3;
unsafe
{
// scan to first byte in bitmaps
byte* pointer1 = (byte*)bitmapData1.Scan0.ToPointer();
byte* pointer2 = (byte*)bitmapData2.Scan0.ToPointer();
for (int x = 0; x < byteCount; x++)
{
diff += (float)Math.Abs(*pointer1 - *pointer2) / 255;
pointer1++;
pointer2++;
}
}
bitmap1.UnlockBits(bitmapData1);
bitmap2.UnlockBits(bitmapData2);
return 100 * diff / byteCount;
}
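A minimal usage sketch (my addition, not from the original answer); note the loop above assumes 3 bytes per pixel and rows packed without stride padding, so it is best suited to 24bpp images whose width times 3 is a multiple of 4:
// Hypothetical driver; CalculateDifference is the method defined above and must be in scope.
var img1 = new Bitmap("Lenna50.jpg");
var img2 = new Bitmap("Lenna100.jpg");
Console.WriteLine("diff: {0} %", CalculateDifference(img1, img2));
img1.Dispose();
img2.Dispose();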

IndexOutOfRangeException where there is no chance to get it

I'm writing a game in XNA and created a simple method to get sub-images from textures, but every time I use it, it throws an exception. I checked the variables and there is no chance of getting out of bounds. Code for the two methods below:
public Color[] GetSubImage(Color[] colorData, int width, Rectangle rec)
{
Color[] color = new Color[rec.Width * rec.Height];
for (int x = 0; x < rec.Width; x++)
{
for (int y = 0; y < rec.Height; y++)
{
color[x + y * rec.Width] = colorData[x + rec.X + (y + rec.Y) * width]; // Exception is thrown there
}
}
return color;
}
public void LoadSubImages(Texture2D sourceSpritesheet, List<Texture2D[]> destinationSprites)
{
int count = 0;
Color[] imageData = new Color[sourceSpritesheet.Width * sourceSpritesheet.Height];
Texture2D subImage;
Rectangle sourceRec;
destinationSprites = new List<Texture2D[]>();
for (int i = 0; i < this.NUMFRAMES.Length; i++)
{
Texture2D[] bi = new Texture2D[this.NUMFRAMES[i]];
for (int j = 0; j < this.NUMFRAMES[i]; j++)
{
sourceRec = new Rectangle(j * this.FRAMEWIDTHS[i], count, this.FRAMEWIDTHS[i], this.FRAMEHEIGHTS[i]);
Color[] imagePiece = this.GetSubImage(imageData, sourceSpritesheet.Width, sourceRec);
subImage = new Texture2D(Game1.Instance.GraphicsDevice, sourceRec.Width, sourceRec.Height);
subImage.SetData<Color>(imagePiece);
bi[j] = subImage;
}
destinationSprites.Add(bi);
count += this.FRAMEHEIGHTS[i];
}
}
sourceSpritesheet is 368*550 pixels, the FRAMEWIDTHS entries are 46, the FRAMEHEIGHTS entries are 50, and NUMFRAMES.Length = 11 (with values between 1-8)
Is there something that I can't see?
colorData has valid indices from 0 to width * height - 1. You're accessing indices starting from rec.X + rec.Y * width up to (rec.X + rec.Width) + (rec.Y + rec.Height) * width. If rec.X or rec.Y is greater than 0 (which will happen, given how you construct your rectangles), this can go out of bounds. .NET Framework arrays are luckily working correctly, the universe is safe...
colorData is 202,400 in size
In the worst case scenario :
colorData[x + rec.X + (y + rec.Y) * width];
x = 45
rec.x = 7*46 = 322
y = 50
rec.y = 11*50 = 550
width = 368
due to the order of operations your formula would execute like so:
x + rec.X + ((y + rec.Y) * width)
45 + 322 + ((50 + 550) * 368)
367 + (600 * 368)
221,167
and 221,167 is greater than the colorData size of 202,400. So in conclusion it is most definitely possible to go out of bounds with your function. I would recommend you rewrite it, as it seems to be a horrid case of spaghetti code.
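For what it's worth, here is a hedged sketch of a more defensive variant (my own illustration, not the original code): it validates the requested rectangle against the source sheet, so the offending frame gets reported instead of an opaque IndexOutOfRangeException.
// Hypothetical defensive rewrite; Color here is Microsoft.Xna.Framework.Color.
public Color[] GetSubImageChecked(Color[] colorData, int width, Rectangle rec)
{
    int height = colorData.Length / width;
    if (rec.X < 0 || rec.Y < 0 || rec.Right > width || rec.Bottom > height)
        throw new ArgumentOutOfRangeException("rec",
            string.Format("Rectangle {0} does not fit in a {1}x{2} sheet", rec, width, height));

    Color[] color = new Color[rec.Width * rec.Height];
    for (int y = 0; y < rec.Height; y++)
        for (int x = 0; x < rec.Width; x++)
            color[x + y * rec.Width] = colorData[(x + rec.X) + (y + rec.Y) * width];
    return color;
}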

How to scan two images for differences?

I'm trying to scan 2 images (32bppArgb format), identify when there is a difference and store the difference block's bounds in a list of rectangles.
Suppose these are the images (two screenshots, the second with an open directory window; the images are omitted here):
I want to get the different rectangle bounds (the opened directory window in our case).
This is what I've done:
private unsafe List<Rectangle> CodeImage(Bitmap bmp, Bitmap bmp2)
{
List<Rectangle> rec = new List<Rectangle>();
bmData = bmp.LockBits(new System.Drawing.Rectangle(0, 0, 1920, 1080), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp.PixelFormat);
bmData2 = bmp2.LockBits(new System.Drawing.Rectangle(0, 0, 1920, 1080), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp2.PixelFormat);
IntPtr scan0 = bmData.Scan0;
IntPtr scan02 = bmData2.Scan0;
int stride = bmData.Stride;
int stride2 = bmData2.Stride;
int nWidth = bmp.Width;
int nHeight = bmp.Height;
int minX = int.MaxValue;
int minY = int.MaxValue;
int maxX = 0;
bool found = false;
for (int y = 0; y < nHeight; y++)
{
byte* p = (byte*)scan0.ToPointer();
p += y * stride;
byte* p2 = (byte*)scan02.ToPointer();
p2 += y * stride2;
for (int x = 0; x < nWidth; x++)
{
if (p[0] != p2[0] || p[1] != p2[1] || p[2] != p2[2] || p[3] != p2[3]) //found differences-began to store positions.
{
found = true;
if (x < minX)
minX = x;
if (x > maxX)
maxX = x;
if (y < minY)
minY = y;
}
else
{
if (found)
{
int height = getBlockHeight(stride, scan0, maxX, minY, scan02, stride2);
found = false;
Rectangle temp = new Rectangle(minX, minY, maxX - minX, height);
rec.Add(temp);
//x += minX;
y += height;
minX = int.MaxValue;
minY = int.MaxValue;
maxX = 0;
}
}
p += 4;
p2 += 4;
}
}
return rec;
}
public unsafe int getBlockHeight(int stride, IntPtr scan, int x, int y1, IntPtr scan02, int stride2) //a function to get an existing block height.
{
int height = 0;
for (int y = y1; y < 1080; y++) //only for example- in our case its 1080 height.
{
byte* p = (byte*)scan.ToPointer();
p += (y * stride) + (x * 4); //set the pointer to a specific potential point.
byte* p2 = (byte*)scan02.ToPointer();
p2 += (y * stride2) + (x * 4); //set the pointer to a specific potential point.
if (p[0] != p2[0] || p[1] != p2[1] || p[2] != p2[2] || p[3] != p2[3]) //still change on the height in the increasing y of the block.
height++;
}
return height;
}
This is actually how I call the method:
Bitmap a = Image.FromFile(@"C:\Users\itapi\Desktop\1.png") as Bitmap; //generates a 32bppArgb bitmap
Bitmap b = Image.FromFile(@"C:\Users\itapi\Desktop\2.png") as Bitmap;
List<Rectangle> l1 = CodeImage(a, b);
int i = 0;
foreach (Rectangle rec in l1)
{
i++;
Bitmap tmp = b.Clone(rec, a.PixelFormat);
tmp.Save(i.ToString() + ".png");
}
But I'm not getting the exact rectangle... I'm getting only half of it and sometimes even worse. I think something in the code's logic is wrong.
Code for @nico:
private unsafe List<Rectangle> CodeImage(Bitmap bmp, Bitmap bmp2)
{
List<Rectangle> rec = new List<Rectangle>();
var bmData1 = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp.PixelFormat);
var bmData2 = bmp2.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp2.PixelFormat);
int bytesPerPixel = 3;
IntPtr scan01 = bmData1.Scan0;
IntPtr scan02 = bmData2.Scan0;
int stride1 = bmData1.Stride;
int stride2 = bmData2.Stride;
int nWidth = bmp.Width;
int nHeight = bmp.Height;
bool[] visited = new bool[nWidth * nHeight];
byte* base1 = (byte*)scan01.ToPointer();
byte* base2 = (byte*)scan02.ToPointer();
for (int y = 0; y < nHeight; y += 5)
{
byte* p1 = base1;
byte* p2 = base2;
for (int x = 0; x < nWidth; x += 5)
{
if (!ArePixelsEqual(p1, p2, bytesPerPixel) && !(visited[x + nWidth * y]))
{
// fill the different area
int minX = x;
int maxX = x;
int minY = y;
int maxY = y;
var pt = new Point(x, y);
Stack<Point> toBeProcessed = new Stack<Point> ();
visited[x + nWidth * y] = true;
toBeProcessed.Push(pt);
while (toBeProcessed.Count > 0)
{
var process = toBeProcessed.Pop();
var ptr1 = (byte*)scan01.ToPointer() + process.Y * stride1 + process.X * bytesPerPixel;
var ptr2 = (byte*) scan02.ToPointer() + process.Y * stride2 + process.X * bytesPerPixel;
//Check pixel equality
if (ArePixelsEqual(ptr1, ptr2, bytesPerPixel))
continue;
//This pixel is different
//Update the rectangle
if (process.X < minX) minX = process.X;
if (process.X > maxX) maxX = process.X;
if (process.Y < minY) minY = process.Y;
if (process.Y > maxY) maxY = process.Y;
Point n;
int idx;
//Put neighbors in stack
if (process.X - 1 >= 0)
{
n = new Point(process.X - 1, process.Y);
idx = n.X + nWidth * n.Y;
if (!visited[idx])
{
visited[idx] = true;
toBeProcessed.Push(n);
}
}
if (process.X + 1 < nWidth)
{
n = new Point(process.X + 1, process.Y);
idx = n.X + nWidth * n.Y;
if (!visited[idx])
{
visited[idx] = true;
toBeProcessed.Push(n);
}
}
if (process.Y - 1 >= 0)
{
n = new Point(process.X, process.Y - 1);
idx = n.X + nWidth * n.Y;
if (!visited[idx])
{
visited[idx] = true;
toBeProcessed.Push(n);
}
}
if (process.Y + 1 < nHeight)
{
n = new Point(process.X, process.Y + 1);
idx = n.X + nWidth * n.Y;
if (!visited[idx])
{
visited[idx] = true;
toBeProcessed.Push(n);
}
}
}
if (((maxX - minX + 1) > 5) & ((maxY - minY + 1) > 5))
rec.Add(new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1));
}
p1 += 5 * bytesPerPixel;
p2 += 5 * bytesPerPixel;
}
base1 += 5 * stride1;
base2 += 5 * stride2;
}
bmp.UnlockBits(bmData1);
bmp2.UnlockBits(bmData2);
return rec;
}
I see a couple of problems with your code. If I understand it correctly, you
find a pixel that's different between the two images.
then you continue to scan from there to the right, until you find a position where both images are identical again.
then you scan from the last "different" pixel to the bottom, until you find a position where both images are identical again.
then you store that rectangle and start at the next line below it
Am I right so far?
Two obvious things can go wrong here:
If two rectangles have overlapping y-ranges, you're in trouble: You'll find the first rectangle fine, then skip to the bottom Y-coordinate, ignoring all the pixels left or right of the rectangle you just found.
Even if there is only one rectangle, you assume that every pixel on the rectangle's border is different, and all the other pixels are identical. If that assumption isn't valid, you'll stop searching too early, and only find parts of rectangles.
If your images come from a scanner or digital camera, or if they contain lossy compression (jpeg) artifacts, the second assumption will almost certainly be wrong. To illustrate this, here's what I get when I mark every identical pixel in the two jpg images you linked black, and every different pixel white:
What you see is not a rectangle. Instead, a lot of pixels around the rectangles you're looking for are different.
That's because of jpeg compression artifacts. But even if you used lossless source images, pixels at the borders might not form perfect rectangles, because of antialiasing or because the background just happens to have a similar color in that region.
You could try to improve your algorithm, but if you look at that border, you will find all kinds of ugly counterexamples to any geometric assumptions you'll make.
It would probably be better to implement this "the right way". Meaning:
Either implement a flood fill algorithm that erases different pixels (e.g. by setting them to identical or by storing a flag in a separate mask) and then recursively checks the 4 neighbor pixels.
Or implement a connected component labeling algorithm, that marks each different pixel with a temporary integer label, using clever data structures to keep track which temporary labels are connected. If you're only interested in a bounding box, you don't even have to merge the temporary labels, just merge the bounding boxes of adjacent labeled areas.
Connected component labeling is in general a bit faster, but is a bit trickier to get right than flood fill.
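For completeness, here is a rough sketch of the connected-component-labeling route (my own illustration, untested against your images). It assumes you have already built a boolean mask diff[x, y] that is true where the two images differ, and it returns one bounding box per connected area using a small union-find.
// Sketch of two-pass connected-component labeling on a boolean "different" mask.
using System.Collections.Generic;
using System.Drawing;

static class DiffLabeling
{
    public static List<Rectangle> BoundingBoxes(bool[,] diff)
    {
        int w = diff.GetLength(0), h = diff.GetLength(1);
        int[] labels = new int[w * h];           // 0 = background
        var parent = new List<int> { 0 };        // union-find forest, index 0 unused

        int Find(int i) { while (parent[i] != i) i = parent[i] = parent[parent[i]]; return i; }
        void Union(int a, int b) { a = Find(a); b = Find(b); if (a != b) parent[b] = a; }

        // First pass: assign provisional labels, merging with the left/top neighbours (4-connectivity).
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                if (!diff[x, y]) continue;
                int left = x > 0 ? labels[(x - 1) + y * w] : 0;
                int top = y > 0 ? labels[x + (y - 1) * w] : 0;
                int label;
                if (left == 0 && top == 0) { label = parent.Count; parent.Add(label); }
                else if (left != 0 && top != 0) { label = left; Union(left, top); }
                else label = left != 0 ? left : top;
                labels[x + y * w] = label;
            }

        // Second pass: accumulate one bounding box per root label.
        var boxes = new Dictionary<int, Rectangle>();
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
            {
                int l = labels[x + y * w];
                if (l == 0) continue;
                int root = Find(l);
                if (!boxes.TryGetValue(root, out var r))
                    boxes[root] = new Rectangle(x, y, 1, 1);
                else
                    boxes[root] = Rectangle.Union(r, new Rectangle(x, y, 1, 1));
            }
        return new List<Rectangle>(boxes.Values);
    }
}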
One last piece of advice: I would rethink your "no 3rd party libraries" policy if I were you. Even if your final product will contain no 3rd party libraries, development might be a lot faster if you used well-documented, well-tested, useful building blocks from a library, then replaced them one by one with your own code. (And who knows, you might even find an open source library with a suitable license that's so much faster than your own code that you'll stick with it in the end...)
ADD: In case you want to rethink your "no libraries" position, here's a quick and simple implementation using AForge (which has a more permissive license than emgucv):
private static void ProcessImages()
{
    // load images
    var img1 = AForge.Imaging.Image.FromFile(@"compare1.jpg");
    var img2 = AForge.Imaging.Image.FromFile(@"compare2.jpg");
    // calculate absolute difference
    var difference = new AForge.Imaging.Filters.ThresholdedDifference(15)
        {OverlayImage = img1}
        .Apply(img2);
    // create and initialize the blob counter
    var bc = new AForge.Imaging.BlobCounter();
    bc.FilterBlobs = true;
    bc.MinWidth = 5;
    bc.MinHeight = 5;
    // find blobs
    bc.ProcessImage(difference);
    // draw result
    BitmapData data = img2.LockBits(
        new Rectangle(0, 0, img2.Width, img2.Height),
        ImageLockMode.ReadWrite, img2.PixelFormat);
    foreach (var rc in bc.GetObjectsRectangles())
        AForge.Imaging.Drawing.FillRectangle(data, rc, Color.FromArgb(128, Color.Red));
    img2.UnlockBits(data);
    img2.Save(@"compareResult.jpg");
}
The actual difference + blob detection part (without loading and result display) takes about 43ms for the second run (the first run takes longer of course, due to JITting, cache, etc.)
Result (the rectangle is larger due to jpeg artifacts):
Here is a flood-fill based version of your code. It checks every pixel for difference. If it finds a different pixel, it runs an exploration to find the entire different area.
The code is only meant as an illustration. There are certainly some points that could be improved.
unsafe bool ArePixelsEqual(byte* p1, byte* p2, int bytesPerPixel)
{
for (int i = 0; i < bytesPerPixel; ++i)
if (p1[i] != p2[i])
return false;
return true;
}
private static unsafe List<Rectangle> CodeImage(Bitmap bmp, Bitmap bmp2)
{
if (bmp.PixelFormat != bmp2.PixelFormat || bmp.Width != bmp2.Width || bmp.Height != bmp2.Height)
throw new ArgumentException();
List<Rectangle> rec = new List<Rectangle>();
var bmData1 = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp.PixelFormat);
var bmData2 = bmp2.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp2.PixelFormat);
int bytesPerPixel = Image.GetPixelFormatSize(bmp.PixelFormat) / 8;
IntPtr scan01 = bmData1.Scan0;
IntPtr scan02 = bmData2.Scan0;
int stride1 = bmData1.Stride;
int stride2 = bmData2.Stride;
int nWidth = bmp.Width;
int nHeight = bmp.Height;
bool[] visited = new bool[nWidth * nHeight];
byte* base1 = (byte*)scan01.ToPointer();
byte* base2 = (byte*)scan02.ToPointer();
for (int y = 0; y < nHeight; y++)
{
byte* p1 = base1;
byte* p2 = base2;
for (int x = 0; x < nWidth; ++x)
{
if (!ArePixelsEqual(p1, p2, bytesPerPixel) && !(visited[x + nWidth * y]))
{
// fill the different area
int minX = x;
int maxX = x;
int minY = y;
int maxY = y;
var pt = new Point(x, y);
Stack<Point> toBeProcessed = new Stack<Point>();
visited[x + nWidth * y] = true;
toBeProcessed.Push(pt);
while (toBeProcessed.Count > 0)
{
var process = toBeProcessed.Pop();
var ptr1 = (byte*)scan01.ToPointer() + process.Y * stride1 + process.X * bytesPerPixel;
var ptr2 = (byte*)scan02.ToPointer() + process.Y * stride2 + process.X * bytesPerPixel;
//Check pixel equality
if (ArePixelsEqual(ptr1, ptr2, bytesPerPixel))
continue;
//This pixel is different
//Update the rectangle
if (process.X < minX) minX = process.X;
if (process.X > maxX) maxX = process.X;
if (process.Y < minY) minY = process.Y;
if (process.Y > maxY) maxY = process.Y;
Point n; int idx;
//Put neighbors in stack
if (process.X - 1 >= 0)
{
n = new Point(process.X - 1, process.Y); idx = n.X + nWidth * n.Y;
if (!visited[idx]) { visited[idx] = true; toBeProcessed.Push(n); }
}
if (process.X + 1 < nWidth)
{
n = new Point(process.X + 1, process.Y); idx = n.X + nWidth * n.Y;
if (!visited[idx]) { visited[idx] = true; toBeProcessed.Push(n); }
}
if (process.Y - 1 >= 0)
{
n = new Point(process.X, process.Y - 1); idx = n.X + nWidth * n.Y;
if (!visited[idx]) { visited[idx] = true; toBeProcessed.Push(n); }
}
if (process.Y + 1 < nHeight)
{
n = new Point(process.X, process.Y + 1); idx = n.X + nWidth * n.Y;
if (!visited[idx]) { visited[idx] = true; toBeProcessed.Push(n); }
}
}
rec.Add(new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1));
}
p1 += bytesPerPixel;
p2 += bytesPerPixel;
}
base1 += stride1;
base2 += stride2;
}
bmp.UnlockBits(bmData1);
bmp2.UnlockBits(bmData2);
return rec;
}
You can achieve this easily using a flood fill segmentation algorithm.
First, a utility class to make fast bitmap access easier. This will help to encapsulate the complex pointer logic and make the code more readable:
class BitmapWithAccess
{
public Bitmap Bitmap { get; private set; }
public System.Drawing.Imaging.BitmapData BitmapData { get; private set; }
public BitmapWithAccess(Bitmap bitmap, System.Drawing.Imaging.ImageLockMode lockMode)
{
Bitmap = bitmap;
BitmapData = bitmap.LockBits(new Rectangle(Point.Empty, bitmap.Size), lockMode, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
}
public Color GetPixel(int x, int y)
{
unsafe
{
byte* dataPointer = MovePointer((byte*)BitmapData.Scan0, x, y);
return Color.FromArgb(dataPointer[3], dataPointer[2], dataPointer[1], dataPointer[0]);
}
}
public void SetPixel(int x, int y, Color color)
{
unsafe
{
byte* dataPointer = MovePointer((byte*)BitmapData.Scan0, x, y);
dataPointer[3] = color.A;
dataPointer[2] = color.R;
dataPointer[1] = color.G;
dataPointer[0] = color.B;
}
}
public void Release()
{
Bitmap.UnlockBits(BitmapData);
BitmapData = null;
}
private unsafe byte* MovePointer(byte* pointer, int x, int y)
{
return pointer + x * 4 + y * BitmapData.Stride;
}
}
Then a class representing a rectangle containing different pixels, to mark them in the resulting image. In general this class can also contain a list of Point instances (or a byte[,] map) to make indicating individual pixels in the resulting image possible:
class Segment
{
public int Left { get; set; }
public int Top { get; set; }
public int Right { get; set; }
public int Bottom { get; set; }
public Bitmap Bitmap { get; set; }
public Segment()
{
Left = int.MaxValue;
Right = int.MinValue;
Top = int.MaxValue;
Bottom = int.MinValue;
}
};
Then the steps of a simple algorithm are as follows:
find different pixels
use a flood-fill algorithm to find segments on the difference image
draw bounding rectangles for the segments found
The first step is the easiest one:
static Bitmap FindDifferentPixels(Bitmap i1, Bitmap i2)
{
var result = new Bitmap(i1.Width, i2.Height, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
var ia1 = new BitmapWithAccess(i1, System.Drawing.Imaging.ImageLockMode.ReadOnly);
var ia2 = new BitmapWithAccess(i2, System.Drawing.Imaging.ImageLockMode.ReadOnly);
var ra = new BitmapWithAccess(result, System.Drawing.Imaging.ImageLockMode.ReadWrite);
for (int x = 0; x < i1.Width; ++x)
for (int y = 0; y < i1.Height; ++y)
{
var different = ia1.GetPixel(x, y) != ia2.GetPixel(x, y);
ra.SetPixel(x, y, different ? Color.White : Color.FromArgb(0, 0, 0, 0));
}
ia1.Release();
ia2.Release();
ra.Release();
return result;
}
And the second and the third steps are covered with the following three functions:
static List<Segment> Segmentize(Bitmap blackAndWhite)
{
var bawa = new BitmapWithAccess(blackAndWhite, System.Drawing.Imaging.ImageLockMode.ReadOnly);
var result = new List<Segment>();
HashSet<Point> queue = new HashSet<Point>();
bool[,] visitedPoints = new bool[blackAndWhite.Width, blackAndWhite.Height];
for (int x = 0;x < blackAndWhite.Width;++x)
for (int y = 0;y < blackAndWhite.Height;++y)
{
if (bawa.GetPixel(x, y).A != 0
&& !visitedPoints[x, y])
{
result.Add(BuildSegment(new Point(x, y), bawa, visitedPoints));
}
}
bawa.Release();
return result;
}
static Segment BuildSegment(Point startingPoint, BitmapWithAccess bawa, bool[,] visitedPoints)
{
var result = new Segment();
List<Point> toProcess = new List<Point>();
toProcess.Add(startingPoint);
while (toProcess.Count > 0)
{
Point p = toProcess.First();
toProcess.RemoveAt(0);
ProcessPoint(result, p, bawa, toProcess, visitedPoints);
}
return result;
}
static void ProcessPoint(Segment segment, Point point, BitmapWithAccess bawa, List<Point> toProcess, bool[,] visitedPoints)
{
for (int i = -1; i <= 1; ++i)
{
for (int j = -1; j <= 1; ++j)
{
int x = point.X + i;
int y = point.Y + j;
if (x < 0 || y < 0 || x >= bawa.Bitmap.Width || y >= bawa.Bitmap.Height)
continue;
if (bawa.GetPixel(x, y).A != 0 && !visitedPoints[x, y])
{
segment.Left = Math.Min(segment.Left, x);
segment.Right = Math.Max(segment.Right, x);
segment.Top = Math.Min(segment.Top, y);
segment.Bottom = Math.Max(segment.Bottom, y);
toProcess.Add(new Point(x, y));
visitedPoints[x, y] = true;
}
}
}
}
And the following program given your two images as arguments:
static void Main(string[] args)
{
Image ai1 = Image.FromFile(args[0]);
Image ai2 = Image.FromFile(args[1]);
Bitmap i1 = new Bitmap(ai1.Width, ai1.Height, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
Bitmap i2 = new Bitmap(ai2.Width, ai2.Height, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
using (var g1 = Graphics.FromImage(i1))
using (var g2 = Graphics.FromImage(i2))
{
g1.DrawImage(ai1, Point.Empty);
g2.DrawImage(ai2, Point.Empty);
}
var difference = FindDifferentPixels(i1, i2);
var segments = Segmentize(difference);
using (var g1 = Graphics.FromImage(i1))
{
foreach (var segment in segments)
{
g1.DrawRectangle(Pens.Red, new Rectangle(segment.Left, segment.Top, segment.Right - segment.Left, segment.Bottom - segment.Top));
}
}
i1.Save("result.png");
Console.WriteLine("Done.");
Console.ReadKey();
}
produces the following result:
As you can see there are more differences between the given images. You can filter the resulting segments with regard to their size for example to drop the small artefacts. Also there is of course much work to do in terms of error checking, design and performance.
One idea is to proceed as follows:
1) Rescale images to a smaller size (downsample)
2) Run the above algorithm on smaller images
3) Run the above algorithm on original images, but restricting yourself only to rectangles found in step 2)
This can be of course extended to a multi-level hierarchical approach (using more different image sizes, increasing accuracy with each step).
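A rough sketch of what step 1) could look like (my own addition, assuming GDI+ and an integer scale factor): downscale both bitmaps, run the comparison on the small versions, then map the resulting rectangles back to original coordinates before re-checking them on the full-size images.
// Hypothetical helper: downscale a bitmap by an integer factor and map a rectangle
// found on the small image back to original coordinates.
using System.Drawing;
using System.Drawing.Drawing2D;

static class DownsampleHelper
{
    public static Bitmap Downscale(Bitmap src, int factor)
    {
        var small = new Bitmap(src.Width / factor, src.Height / factor);
        using (var g = Graphics.FromImage(small))
        {
            g.InterpolationMode = InterpolationMode.Bilinear;
            g.DrawImage(src, new Rectangle(0, 0, small.Width, small.Height));
        }
        return small;
    }

    public static Rectangle ScaleUp(Rectangle found, int factor)
    {
        // Simple linear mapping; pad and clamp to the image bounds before re-checking.
        return new Rectangle(found.X * factor, found.Y * factor,
                             found.Width * factor, found.Height * factor);
    }
}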
Ah an algorithm challenge. Like! :-)
There are other answers here using f.ex. floodfill that will work just fine. I just noticed that you wanted something fast, so let me propose a different idea. Unlike the other people, I haven't tested it; it shouldn't be too hard and should be quite fast, but I simply don't have the time at the moment to test it myself. If you do, please share the results. Also, note that it's not a standard algorithm, so there are probably some bugs here and there in my explanation (and no patents).
My idea is derived from the idea of mean adaptive thresholding, but with a lot of important differences. I cannot find the Wikipedia link or my code anymore, so I'll do this off the top of my head. Basically you create a new (64-bit) buffer for both images and fill it with:
f(x,y) = colorvalue + f(x-1, y) + f(x, y-1) - f(x-1, y-1)
f(x,0) = colorvalue + f(x-1, 0)
f(0,y) = colorvalue + f(0, y-1)
The main trick is that you can calculate the sum value of a portion of the image fast, namely by:
g(x1,y1,x2,y2) = f(x2,y2)-f(x1-1,y2)-f(x2,y1-1)+f(x1-1,y1-1)
In other words, this will give the same result as:
result = 0;
for (x=x1; x<=x2; ++x)
for (y=y1; y<=y2; ++y)
result += f(x,y)
In our case this means that with only 4 integer operations this will get you a more or less unique number for the block in question. I'd say that's pretty awesome.
Now, in our case, we don't really care about the average value; we just care about some sort-of unique number. If the image changes, it should change - simple as that. As for colorvalue, usually some gray scale number is used for thresholding - instead, we'll be using the complete 24-bit RGB value. Because there are only so few compares, we can simply scan until we find a block that doesn't match.
The basic algorithm that I propose works as follows:
for (y=0; y<height;++y)
for (x=0; x<width; ++x)
if (src[x,y] != dst[x,y])
if (!IntersectsWith(x, y, foundBlocks))
FindBlock(foundBlocks);
Now, IntersectsWith can be something like a quad tree, or if there are only a few blocks, you can simply iterate through the blocks and check if they are within the bounds of the block. You can also update the x variable accordingly (I would). You can even balance things by re-building the buffer for f(x,y) if you have too many blocks (more precisely: merge found blocks back from dst into src, then rebuild the buffer).
FindBlocks is where it gets interesting. Using the formula for g that's now pretty easy:
int x1 = x-1; int y1 = y-1; int x2 = x; int y2 = y;
while (changes)
{
while (g(srcimage,x1-1,y1,x1,y2) == g(dstimage,x1-1,y1,x1,y2)) { --x1; }
while (g(srcimage,x1,y1-1,x1,y2) == g(dstimage,x1,y1-1,x1,y2)) { --y1; }
while (g(srcimage,x1,y1,x1+1,y2) == g(dstimage,x1,y1,x1+1,y2)) { ++x1; }
while (g(srcimage,x1,y1,x1,y2+1) == g(dstimage,x1,y1,x1,y2+1)) { ++y1; }
}
That's it. Note that the complexity of the FindBlocks algorithm is O(x + y), which is pretty awesome for finding a 2D block IMO. :-)
As I said, let me know how it turns out.
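For illustration, here is a rough, untested sketch of the f and g buffers described above in C#, using one 64-bit entry per pixel and the packed 24-bit RGB value as the colorvalue (in practice you would read the pixels via LockBits rather than GetPixel):
// Sketch of the summed-area table described above: f holds cumulative sums so that
// the sum over any rectangle can be read back with four lookups (function G).
static long[,] BuildTable(System.Drawing.Bitmap bmp)
{
    int w = bmp.Width, h = bmp.Height;
    var f = new long[w, h];
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            long color = bmp.GetPixel(x, y).ToArgb() & 0xFFFFFF;   // 24-bit RGB as the "colorvalue"
            f[x, y] = color
                    + (x > 0 ? f[x - 1, y] : 0)
                    + (y > 0 ? f[x, y - 1] : 0)
                    - (x > 0 && y > 0 ? f[x - 1, y - 1] : 0);
        }
    return f;
}

// Sum of colorvalues over the inclusive rectangle (x1,y1)-(x2,y2).
static long G(long[,] f, int x1, int y1, int x2, int y2)
{
    long total = f[x2, y2];
    if (x1 > 0) total -= f[x1 - 1, y2];
    if (y1 > 0) total -= f[x2, y1 - 1];
    if (x1 > 0 && y1 > 0) total += f[x1 - 1, y1 - 1];
    return total;
}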

Parallel.For statement returns "System.InvalidOperationException" with Bitmap processing

Well, I have code to apply a rainbow filter to an image. I have to do it in two ways, sequential and parallel; my sequential code works without problems, but the parallel version doesn't, and I have no idea why.
Code
public static Bitmap RainbowFilterParallel(Bitmap bmp)
{
Bitmap temp = new Bitmap(bmp.Width, bmp.Height);
int raz = bmp.Height / 4;
Parallel.For(0, bmp.Width, i =>
{
Parallel.For(0, bmp.Height, x =>
{
if (i < (raz))
{
temp.SetPixel(i, x, Color.FromArgb(bmp.GetPixel(i, x).R / 5, bmp.GetPixel(i, x).G, bmp.GetPixel(i, x).B));
}
else if (i < (raz * 2))
{
temp.SetPixel(i, x, Color.FromArgb(bmp.GetPixel(i, x).R, bmp.GetPixel(i, x).G / 5, bmp.GetPixel(i, x).B));
}
else if (i < (raz * 3))
{
temp.SetPixel(i, x, Color.FromArgb(bmp.GetPixel(i, x).R, bmp.GetPixel(i, x).G, bmp.GetPixel(i, x).B / 5));
}
else if (i < (raz * 4))
{
temp.SetPixel(i, x, Color.FromArgb(bmp.GetPixel(i, x).R / 5, bmp.GetPixel(i, x).G, bmp.GetPixel(i, x).B / 5));
}
else
{
temp.SetPixel(i, x, Color.FromArgb(bmp.GetPixel(i, x).R / 5, bmp.GetPixel(i, x).G / 5, bmp.GetPixel(i, x).B / 5));
}
});
});
return temp;
}
Besides, at some moments the program returns the same error but says "The object is already in use".
PS. I'm a beginner with C#, and I searched this topic in other posts and found nothing.
Thank you very much in advance
As commenter Ron Beyer points out, using the SetPixel() and GetPixel() methods is very slow. Each call to one of those methods involves a lot of overhead in the transition between your managed code down to the actual binary buffer that the Bitmap object represents. There are a lot of layers there, and the video driver typically gets involved which requires transitions between user and kernel level execution.
But besides being slow, these methods also make the object "busy", and so if an attempt to use the bitmap (including calling one of those methods) is made between the time one of those methods is called and when it returns (i.e. while the call is in progress), an error occurs with the exception you saw.
Since the only way that parallelizing your current code would be helpful is if these method calls could occur concurrently, and since they simply cannot, this approach isn't going to work.
On the other hand, using the LockBits() method is not only guaranteed to work, there's a very good chance that you will find the performance is so much better using LockBits() that you don't even need to parallelize the algorithm. But should you decide you do, because of the way LockBits() works — you gain access to a raw buffer of bytes that represents the bitmap image — you can easily parallelize the algorithm and take advantage of multiple CPU cores (if present).
Note that when using LockBits() you will be working with the Bitmap object at a level that you might not be accustomed to. If you are not already knowledgeable with how bitmaps really work "under the hood", you will have to familiarize yourself with the way that bitmaps are actually stored in memory. This includes understanding what the different pixel formats mean, how to interpret and modify pixels for a given format, and how a bitmap is laid out in memory (e.g. the order of rows, which can vary depending on the bitmap, as well as the "stride" of the bitmap).
These things are not terribly hard to learn, but it will require patience. It is well worth the effort though, if performance is your goal.
Parallel is hard on the singular mind. And mixing it with legacy GDI+ code can lead to strange results..
Your code has numerous issues:
You call GetPixel three times per pixel instead of once
You are not accessing the pixels horizontally, as you should
You call y x and x i; the machine won't mind, but we humans do
You are using way too much parallelization. There is no use in having much more of it than you have cores. It creates overhead that is bound to eat up any gains, unless your inner loop has a really hard job to do, like millions of calculations.
But the exception you get has nothing to do with these issues. And one mistake you don't make is to access the same pixel in parallel... So why the crash?
After cleaning up the code I found that the error in the stack trace pointed to SetPixel and there to System.Drawing.Image.get_Width(). The former is obvious, the latter not part of our code..!?
So I dug into the source code at referencesource.microsoft.com and found this:
/// <include file='doc\Bitmap.uex' path='docs/doc[@for="Bitmap.SetPixel"]/*' />
/// <devdoc>
/// <para>
/// Sets the color of the specified pixel in this <see cref='System.Drawing.Bitmap'/> .
/// </para>
/// </devdoc>
public void SetPixel(int x, int y, Color color) {
if ((PixelFormat & PixelFormat.Indexed) != 0) {
throw new InvalidOperationException(SR.GetString(SR.GdiplusCannotSetPixelFromIndexedPixelFormat));
}
if (x < 0 || x >= Width) {
throw new ArgumentOutOfRangeException("x", SR.GetString(SR.ValidRangeX));
}
if (y < 0 || y >= Height) {
throw new ArgumentOutOfRangeException("y", SR.GetString(SR.ValidRangeY));
}
int status = SafeNativeMethods.Gdip.GdipBitmapSetPixel(new HandleRef(this, nativeImage), x, y, color.ToArgb());
if (status != SafeNativeMethods.Gdip.Ok)
throw SafeNativeMethods.Gdip.StatusException(status);
}
The real work is done by SafeNativeMethods.Gdip.GdipBitmapSetPixel, but before that the method does a bounds check on the Bitmap's Width and Height. And while these in our case of course never change, the system still won't allow accessing them in parallel and hence crashes when, at some point, the checks happen interwoven. Totally unnecessary, of course, but there you go...
So GetPixel (which has the same behaviour) and SetPixel can't safely be used in parallel processing.
Two ways out of it:
We can add locks to the code and thus make sure the checks won't happen at the 'same' time:
public static Bitmap RainbowFilterParallel(Bitmap bmp)
{
Bitmap temp = new Bitmap(bmp);
int raz = bmp.Height / 4;
int height = bmp.Height;
int width = bmp.Width;
// set a limit to parallesim
int maxCore = 7;
int blockH = height / maxCore + 1;
//lock (temp)
Parallel.For(0, maxCore, cor =>
{
//Parallel.For(0, bmp.Height, x =>
for (int yb = 0; yb < blockH; yb++)
{
int i = cor * blockH + yb;
if (i >= height) continue;
for (int x = 0; x < width; x++)
{
{
Color c;
// lock the Bitmap just for the GetPixel:
lock (temp) c = temp.GetPixel(x, i);
byte R = c.R;
byte G = c.G;
byte B = c.B;
if (i < (raz)) { R = (byte)(c.R / 5); }
else if (i < raz + raz) { G = (byte)(c.G / 5); }
else if (i < raz * 3) { B = (byte)(c.B / 5); }
else if (i < raz * 4) { R = (byte)(c.R / 5); B = (byte)(c.B / 5); }
else { G = (byte)(c.G / 5); R = (byte)(c.R / 5); }
// lock the Bitmap just for the SetPixel:
lock (temp) temp.SetPixel(x, i, Color.FromArgb(R,G,B));
};
}
};
});
return temp;
}
Note that limiting parallelism is so important there is even a member in the ParallelOptions class and a parameter in Parallel.For to control it! I have set the maximum core number to 7, but this would be better:
int degreeOfParallelism = Environment.ProcessorCount - 1;
So this should save us some overhead. But still: I'd expect that to be slower than a corrected sequential method!
Instead, going for LockBits as Peter and Ron have suggested makes things really fast (10x), and adding parallelism potentially makes it even faster still.
So finally, to finish up this lengthy answer, here is a LockBits plus limited-parallel solution:
public static Bitmap RainbowFilterParallelLockbits(Bitmap bmp)
{
Bitmap temp = null;
temp = new Bitmap(bmp);
int raz = bmp.Height / 4;
int height = bmp.Height;
int width = bmp.Width;
Rectangle rect = new Rectangle(Point.Empty, bmp.Size);
BitmapData bmpData = temp.LockBits(rect,ImageLockMode.ReadOnly, temp.PixelFormat);
int bpp = (temp.PixelFormat == PixelFormat.Format32bppArgb) ? 4 : 3;
int size = bmpData.Stride * bmpData.Height;
byte[] data = new byte[size];
System.Runtime.InteropServices.Marshal.Copy(bmpData.Scan0, data, 0, size);
var options = new ParallelOptions();
int maxCore = Environment.ProcessorCount - 1;
options.MaxDegreeOfParallelism = maxCore > 0 ? maxCore : 1;
Parallel.For(0, height, options, y =>
{
for (int x = 0; x < width; x++)
{
{
int index = y * bmpData.Stride + x * bpp;
if (y < (raz)) data[index + 2] = (byte) (data[index + 2] / 5);
else if (y < (raz * 2)) data[index + 1] = (byte)(data[index + 1] / 5);
else if (y < (raz * 3)) data[index ] = (byte)(data[index ] / 5);
else if (y < (raz * 4))
{ data[index + 2] = (byte)(data[index + 2] / 5);
data[index] = (byte)(data[index] / 5); }
else
{ data[index + 2] = (byte)(data[index + 2] / 5);
data[index + 1] = (byte)(data[index + 1] / 5);
data[index] = (byte)(data[index] / 5); }
};
};
});
System.Runtime.InteropServices.Marshal.Copy(data, 0, bmpData.Scan0, data.Length);
temp.UnlockBits(bmpData);
return temp;
}
While not strictly relevant, I wanted to post a better, faster version than any of the ones given in the answers. This is the fastest way I know of to iterate through a bitmap and save the results in C#. In my work we need to go through millions of large images; this is just me grabbing the red channel and saving it for my own purposes, but it should give you an idea of how it works.
//Parallel Unsafe, Corrected Channel, Corrected Standard div 5x faster
private void TakeApart_Much_Faster(Bitmap processedBitmap)
{
_RedMin = byte.MaxValue;
_RedMax = byte.MinValue;
_arr = new byte[BMP.Width, BMP.Height];
long Sum = 0,
SumSq = 0;
BitmapData bitmapData = processedBitmap.LockBits(new Rectangle(0, 0, processedBitmap.Width, processedBitmap.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
//this is a much more useful datastructure than the array but it's slower to fill.
points = new ConcurrentDictionary<Point, byte>();
unsafe
{
int bytesPerPixel = Image.GetPixelFormatSize(bitmapData.PixelFormat) / 8;
int heightInPixels = bitmapData.Height;
int widthInBytes = bitmapData.Width * bytesPerPixel;
_RedMin = byte.MaxValue;
_RedMax = byte.MinValue;
byte* PtrFirstPixel = (byte*)bitmapData.Scan0;
Parallel.For(0, heightInPixels, y =>
{
//pointer to the first pixel so we don't lose track of where we are
byte* currentLine = PtrFirstPixel + (y * bitmapData.Stride);
for (int x = 0; x < widthInBytes; x = x + bytesPerPixel)
{
//0+2 is red channel
byte redPixel = currentLine[x + 2];
Interlocked.Add(ref Sum, redPixel);
Interlocked.Add(ref SumSq, redPixel * redPixel);
//divide by three since we are skipping ahead 3 at a time.
_arr[x/3, y] = redPixel;
_RedMin = redPixel < _RedMin ? redPixel : _RedMin;
_RedMax = redPixel > _RedMax ? redPixel : _RedMax;
}
});
_RedMean = Sum / TotalPixels;
_RedStDev = Math.Sqrt((SumSq / TotalPixels) - (_RedMean * _RedMean));
processedBitmap.UnlockBits(bitmapData);
}
}

MeasureString and DrawString difference

Why do I have to increase the MeasureString() result width by 21%
size.Width = size.Width * 1.21f;
to avoid word wrap in DrawString()?
I need a solution to get the exact result.
Same font, same stringformat, same text used in both functions.
From answer by OP:
SizeF size = graphics.MeasureString(element.Currency, Currencyfont, new PointF(0, 0), strFormatLeft);
size.Width = size.Width * 1.21f;
int freespace = rect.Width - (int)size.Width;
if (freespace < ImageSize) { if (freespace > 0) ImageSize = freespace; else ImageSize = 0; }
int FlagY = y + (CurrencySize - ImageSize) / 2;
int FlagX = (freespace - ImageSize) / 2;
graphics.DrawImage(GetResourseImage(@"Flags." + element.Flag.ToUpper() + ".png"),
new Rectangle(FlagX, FlagY, ImageSize, ImageSize));
graphics.DrawString(element.Currency, Currencyfont, Brushes.Black,
new Rectangle(FlagX + ImageSize, rect.Y, (int)(size.Width), CurrencySize), strFormatLeft);
My code.
The MeasureString() method has some issues, especially when drawing non-ASCII characters. Please try TextRenderer.MeasureText() instead.
Graphics.MeasureString, TextRenderer.MeasureText and Graphics.MeasureCharacterRanges
all return a size that includes blank pixels around the glyph to accommodate ascenders and descenders.
In other words, they return the height of "a" as the same as the height of "d" (ascender) or "y" (descender). If you need the true size of the glyph, the only way is to draw the string and count the pixels:
Public Shared Function MeasureStringSize(ByVal graphics As Graphics, ByVal text As String, ByVal font As Font) As SizeF
' Get initial estimate with MeasureText
Dim flags As TextFormatFlags = TextFormatFlags.Left + TextFormatFlags.NoClipping
Dim proposedSize As Size = New Size(Integer.MaxValue, Integer.MaxValue)
Dim size As Size = TextRenderer.MeasureText(graphics, text, font, proposedSize, flags)
' Create a bitmap
Dim image As New Bitmap(size.Width, size.Height)
image.SetResolution(graphics.DpiX, graphics.DpiY)
Dim strFormat As New StringFormat
strFormat.Alignment = StringAlignment.Near
strFormat.LineAlignment = StringAlignment.Near
' Draw the actual text
Dim g As Graphics = graphics.FromImage(image)
g.SmoothingMode = SmoothingMode.HighQuality
g.TextRenderingHint = Drawing.Text.TextRenderingHint.AntiAliasGridFit
g.Clear(Color.White)
g.DrawString(text, font, Brushes.Black, New PointF(0, 0), strFormat)
' Find the true boundaries of the glyph
Dim xs As Integer = 0
Dim xf As Integer = size.Width - 1
Dim ys As Integer = 0
Dim yf As Integer = size.Height - 1
' Find left margin
Do While xs < xf
For y As Integer = ys To yf
If image.GetPixel(xs, y).ToArgb <> Color.White.ToArgb Then
Exit Do
End If
Next
xs += 1
Loop
' Find right margin
Do While xf > xs
For y As Integer = ys To yf
If image.GetPixel(xf, y).ToArgb <> Color.White.ToArgb Then
Exit Do
End If
Next
xf -= 1
Loop
' Find top margin
Do While ys < yf
For x As Integer = xs To xf
If image.GetPixel(x, ys).ToArgb <> Color.White.ToArgb Then
Exit Do
End If
Next
ys += 1
Loop
' Find bottom margin
Do While yf > ys
For x As Integer = xs To xf
If image.GetPixel(x, yf).ToArgb <> Color.White.ToArgb Then
Exit Do
End If
Next
yf -= 1
Loop
Return New SizeF(xf - xs + 1, yf - ys + 1)
End Function
If it helps anyone, I translated the answer from smirkingman to C#, fixing memory issues (using/Dispose) and the outer-loop breaks (no TODOs). I also used scaling on the graphics (and fonts), so I added that, too (it didn't work otherwise). And it returns a RectangleF, because I wanted to position the text precisely (with Graphics.DrawString).
The not-perfect but good enough for my purpose source code:
static class StringMeasurer
{
private static SizeF GetScaleTransform(Matrix m)
{
/*
3x3 matrix, affine transformation (skew - used by rotation)
[ X scale, Y skew, 0 ]
[ X skew, Y scale, 0 ]
[ X translate, Y translate, 1 ]
indices (0, ...): X scale, Y skew, Y skew, X scale, X translate, Y translate
*/
return new SizeF(m.Elements[0], m.Elements[3]);
}
public static RectangleF MeasureString(Graphics graphics, Font f, string s)
{
//copy only scale, not rotate or transform
var scale = GetScaleTransform(graphics.Transform);
// Get initial estimate with MeasureText
//TextFormatFlags flags = TextFormatFlags.Left | TextFormatFlags.NoClipping;
//Size proposedSize = new Size(int.MaxValue, int.MaxValue);
//Size size = TextRenderer.MeasureText(graphics, s, f, proposedSize, flags);
SizeF sizef = graphics.MeasureString(s, f);
sizef.Width *= scale.Width;
sizef.Height *= scale.Height;
Size size = sizef.ToSize();
int xLeft = 0;
int xRight = size.Width - 1;
int yTop = 0;
int yBottom = size.Height - 1;
// Create a bitmap
using (Bitmap image = new Bitmap(size.Width, size.Height))
{
image.SetResolution(graphics.DpiX, graphics.DpiY);
StringFormat strFormat = new StringFormat();
strFormat.Alignment = StringAlignment.Near;
strFormat.LineAlignment = StringAlignment.Near;
// Draw the actual text
using (Graphics g = Graphics.FromImage(image))
{
g.SmoothingMode = graphics.SmoothingMode;
g.TextRenderingHint = graphics.TextRenderingHint;
g.Clear(Color.White);
g.ScaleTransform(scale.Width, scale.Height);
g.DrawString(s, f, Brushes.Black, new PointF(0, 0), strFormat);
}
// Find the true boundaries of the glyph
// Find left margin
for (; xLeft < xRight; xLeft++)
for (int y = yTop; y <= yBottom; y++)
if (image.GetPixel(xLeft, y).ToArgb() != Color.White.ToArgb())
goto OUTER_BREAK_LEFT;
OUTER_BREAK_LEFT: ;
// Find right margin
for (; xRight > xLeft; xRight--)
for (int y = yTop; y <= yBottom; y++)
if (image.GetPixel(xRight, y).ToArgb() != Color.White.ToArgb())
goto OUTER_BREAK_RIGHT;
OUTER_BREAK_RIGHT: ;
// Find top margin
for (; yTop < yBottom; yTop++)
for (int x = xLeft; x <= xRight; x++)
if (image.GetPixel(x, yTop).ToArgb() != Color.White.ToArgb())
goto OUTER_BREAK_TOP;
OUTER_BREAK_TOP: ;
// Find bottom margin
for (; yBottom > yTop; yBottom-- )
for (int x = xLeft; x <= xRight; x++)
if (image.GetPixel(x, yBottom).ToArgb() != Color.White.ToArgb())
goto OUTER_BREAK_BOTTOM;
OUTER_BREAK_BOTTOM: ;
}
var pt = new PointF(xLeft, yTop);
var sz = new SizeF(xRight - xLeft + 1, yBottom - yTop + 1);
return new RectangleF(pt.X / scale.Width, pt.Y / scale.Height,
sz.Width / scale.Width, sz.Height / scale.Height);
}
}
This article on codeproject gives two ways to get the exact size of characters as they are rendered by DrawString.
Personally, the most efficient way and what I recommend, has always been:
const TextFormatFlags _textFormatFlags = TextFormatFlags.NoPadding | TextFormatFlags.NoPrefix | TextFormatFlags.PreserveGraphicsClipping;
// Retrieve width
int width = TextRenderer.MeasureText(element.Currency, Currencyfont, new Size(short.MaxValue, short.MaxValue), _textFormatFlags).Width + 1;
// Retrieve height
int _tempHeight1 = TextRenderer.MeasureText("_", Currencyfont).Height;
int _tempHeight2 = (int)Math.Ceiling(Currencyfont.GetHeight());
int height = Math.Max(_tempHeight1, _tempHeight2) + 1;
You likely need to add the following to the StringFormat flags:
StringFormatFlags.FitBlackBox
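For example (a sketch of mine, reusing the names from the code above, so strFormatLeft, graphics, Currencyfont and element are assumed to exist):
// Add FitBlackBox to the format that is used for both MeasureString and DrawString.
strFormatLeft.FormatFlags |= StringFormatFlags.FitBlackBox;
SizeF size = graphics.MeasureString(element.Currency, Currencyfont, new PointF(0, 0), strFormatLeft);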
try this solution: http://www.codeproject.com/Articles/2118/Bypass-Graphics-MeasureString-limitation
(found it at https://stackoverflow.com/a/11708952/908936)
code:
static public int MeasureDisplayStringWidth(Graphics graphics, string text, Font font)
{
System.Drawing.StringFormat format = new System.Drawing.StringFormat ();
System.Drawing.RectangleF rect = new System.Drawing.RectangleF(0, 0, 1000, 1000);
var ranges = new System.Drawing.CharacterRange(0, text.Length);
System.Drawing.Region[] regions = new System.Drawing.Region[1];
format.SetMeasurableCharacterRanges (new[] {ranges});
regions = graphics.MeasureCharacterRanges (text, font, rect, format);
rect = regions[0].GetBounds (graphics);
return (int)(rect.Right + 1.0f);
}
