I have a function to check if an image is just one color.
bool r = true;
Color checkColor = image.GetPixel(0, 0);
for (int x = 0; x < image.Width; x++)
{
for (int y = 0; y < image.Height; y++)
{
if (image.GetPixel(x, y) != checkColor) { r = false; }
}
}
// image color
clrOut = checkColor;
return r;
But this algorithm is slow for big images.
Does anyone know of a way to do this using pixel shaders and the GPU?
You don't need pixel shaders and a GPU to speed this up. Use LockBits. Bob Powell has a good tutorial on doing exactly what you want.
Also, looking at your code, try reversing the for loops; iterating over y in the outer loop and x in the inner loop gives better memory access patterns:
for( y...
...
for ( x...
The next step is to unroll some of the pixel accesses. Try fetching 4 or 8 pixels in the inner loop:
for( y...
...
    for ( x = 0; x < image.Width; x += 4 )
        pixel0 = image.GetPixel(x, y)
        pixel1 = image.GetPixel(x + 1, y)
        pixel2 = image.GetPixel(x + 2, y)
        pixel3 = image.GetPixel(x + 3, y)
        if ( pixel0 ....
As stated earlier, using Bitmap.LockBits gives you access to the pixels via pointers, which is the fastest you can get on a CPU. You can apply loop ordering and pixel unrolling to that technique too.
If this isn't fast enough, then the choice is between C# multi-threading or the GPU with OpenCL and its C# bindings.
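To make the LockBits suggestion concrete, here is a rough sketch of what the single-colour check could look like with pointer access (assuming a 32bpp ARGB bitmap; the method name and the early exit are illustrative, not the asker's original code):
// requires System.Drawing and System.Drawing.Imaging; compile with /unsafe
static unsafe bool IsSingleColor(Bitmap image, out Color clrOut)
{
    var rect = new Rectangle(0, 0, image.Width, image.Height);
    var data = image.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    try
    {
        int* scan0 = (int*)data.Scan0;
        int stride = data.Stride / 4;      // stride in 32-bit pixels
        int first = scan0[0];              // colour of pixel (0, 0)
        clrOut = Color.FromArgb(first);
        for (int y = 0; y < image.Height; y++)
        {
            int* row = scan0 + y * stride;
            for (int x = 0; x < image.Width; x++)
            {
                if (row[x] != first)
                    return false;          // early exit on the first mismatch
            }
        }
        return true;
    }
    finally
    {
        image.UnlockBits(data);
    }
}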
This code is slow, because you use GetPixel. You can make it much faster by using direct pointer access. Only if that's not enough, I'd look into pixel shaders.
I've written some helper libraries: https://github.com/CodesInChaos/ChaosUtil/tree/master/Chaos.Image
In particular the Pixels and the RawColor types should be useful.
If you're only looking for an image with large areas of variation versus one with the same color, you could shrink it down to a 1x1 pixel using Bilinear filtering and read pixel 0,0.
If the pixel is VERY different from what you expect (RGB distance versus a tolerance), you can be sure that there was some variation in the original image.
Of course, this depends on what you really want to do with this info so YMMV.
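A minimal sketch of that idea (assumptions: GDI+ with bilinear interpolation; a large downscale only approximates the average colour, so treat this as a heuristic rather than an exact check):
// requires System.Drawing and System.Drawing.Drawing2D
static Color ApproximateAverageColor(Bitmap image)
{
    using (var tiny = new Bitmap(1, 1))
    using (var g = Graphics.FromImage(tiny))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBilinear;
        g.DrawImage(image, new Rectangle(0, 0, 1, 1));
        return tiny.GetPixel(0, 0);
    }
}
You can then compare the returned colour against the expected colour with an RGB distance and a tolerance, as described above.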
Is there any way to optimize this:
A character is stored in a "matrix" of bytes, dimensions 9x16, for the sake of the example, let's call it character.
The bytes can have the value 1 or 0, meaning draw foreground and draw background respectively.
The X and Y variables are integers, representing X and Y coordinates used for the SetPixel() function. BG and FG represent background and foreground colors respectively, both type of Color.
The drawing part of the algorithm itself looks like this:
for (int i = 0; i < 16; i++)
{
    for (int j = 0; j < 9; j++)
    {
        if (character[i][j] == 1)
        {
            SetPixel(X, Y, BG);
        }
        else
        {
            SetPixel(X, Y, FG);
        }
        X++;
    }
    X = 0;
    Y++;
}
Later on, X is incremented by 9 and Y is set back to 0.
The problem with this algorithm is that, when it's called to draw a string (many characters sequentially), it's extremely slow.
I'm not really sure what the characters mean, however.
GetPixel internally calls LockBits to pin the memory.
Ergo it's best to use LockBits once and be done with it.
Always call UnlockBits.
Direct pointer access using unsafe can give you a small amount of extra performance as well.
Also (in this case) your for loops can be optimized (code-wise) to include your other indexes.
Example
protected unsafe void DoStuff(string path)
{
    ...
    using (var b = new Bitmap(path))
    {
        var r = new Rectangle(Point.Empty, b.Size);
        // ReadWrite because we are writing pixels back into the bitmap
        var data = b.LockBits(r, ImageLockMode.ReadWrite, PixelFormat.Format32bppPArgb);
        var p = (int*)data.Scan0;
        // Stride is in bytes; divide by 4 to step in whole 32bpp pixels
        var stride = data.Stride / 4;
        try
        {
            for (int i = 0; i < 16; i++, Y++)
                for (int j = 0, X = 0; j < 9; j++, X++)
                    *(p + X + Y * stride) = (character[i][j] == 1 ? BG : FG).ToArgb();
        }
        finally
        {
            b.UnlockBits(data);
        }
    }
}
Bitmap.LockBits
Locks a Bitmap into system memory.
Bitmap.UnlockBits
Unlocks this Bitmap from system memory.
unsafe
The unsafe keyword denotes an unsafe context, which is required for
any operation involving pointers.
Further reading
Unsafe Code and Pointers
Bitmap.GetPixel
LockBits vs Get Pixel Set Pixel - Performance
I am developing an application that saves a screenshot at a regular interval, let's say every 10 seconds.
In general the images are very similar, sometimes identical, so I came up with the idea of creating a bitmap that represents the difference between the current screenshot and the previous one.
To achieve this, I am comparing the two images pixel by pixel, and when they are equal I set the pixel to a transparent colour (in the original code I am using Bitmap.LockBits for better performance):
for (var x = 0; x < width; x++)
    for (var y = 0; y < height; y++)
    {
        var oldColor = lastPrint.GetPixel(x, y);
        var color = currentPrint.GetPixel(x, y);
        if (oldColor == color)
        {
            differencePrint.SetPixel(x, y, Color.Transparent);
        }
    }
To recover an image, I take the first screenshot and then draw the sequential difference bitmaps over it.
private void MergePrints()
{
    var lastBitmap = new Bitmap(firstPrint);
    foreach (var print in prints.OrderBy(e => e.Date))
    {
        using (var difference = new Bitmap(print.Image))
        {
            using (var grfx = Graphics.FromImage(lastBitmap))
            {
                grfx.DrawImage(difference, 0, 0);
            }
        }
        lastBitmap.Save(print.id + ".png");
    }
    lastBitmap.Dispose();
}
My question is: Is there a better way to generate an object that represents the difference between the two images, other than a new image with transparent pixels? Maybe a new class? But this class needs to be persisted and of course be "smaller" than a bitmap; currently I am persisting the bitmap as a byte[] after compressing it with the 7zip algorithm.
You can do so using the code in this answer
It uses LockBits and is really fast. It assumes the format to be the native ARGB pixel format.
It takes two Bitmaps as a parameter and returns the difference Bitmap.
The 3rd parameter lets you restore the original from the difference (if you store it losslessly, Png is recommended); this was written to allow faster transmission of only the difference image, because for only small differences it allows a much better compression ratio.
This sounds rather similar to your situation, right?
To answer the question directly: I can't see where you could get better compression or a handier format than from the developers of Png. As an added bonus you can always look at the difference image for testing and immediately see the amount and the distribution of the changes.
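For reference, here is a rough sketch of what such a difference routine can look like; this is not the code from the linked answer, just an illustration assuming 32bpp ARGB bitmaps of equal size (restoring works by drawing the difference over the previous frame, as in MergePrints above):
static unsafe Bitmap Difference(Bitmap a, Bitmap b)
{
    var rect = new Rectangle(0, 0, a.Width, a.Height);
    var result = new Bitmap(a.Width, a.Height, PixelFormat.Format32bppArgb);
    var da = a.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    var db = b.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    var dr = result.LockBits(rect, ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
    try
    {
        int* pa = (int*)da.Scan0, pb = (int*)db.Scan0, pr = (int*)dr.Scan0;
        int count = a.Width * a.Height; // for 32bpp, Stride == Width * 4, so one flat loop is safe
        for (int i = 0; i < count; i++)
        {
            // identical pixels become fully transparent; changed pixels keep b's colour
            pr[i] = pa[i] == pb[i] ? 0 : pb[i];
        }
    }
    finally
    {
        a.UnlockBits(da);
        b.UnlockBits(db);
        result.UnlockBits(dr);
    }
    return result;
}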
I am currently working on a project in which I am required to write software that compares two images made up of the same area and draws a box around the differences. I wrote the program in c# .net in a few hours but soon realized it was INCREDIBLY expensive to run. Here are the steps I implemented it in.
Created a Pixel class that stores the x,y coordinates of each pixel and a PixelRectangle class that stores a list of pixels along with width,height,x and y properties.
Looped through every pixel of each image, comparing the colour of each pair of corresponding pixels. If the colour was different I then created a new Pixel object with the x,y coordinates of that pixel and added it to a pixelDifference list.
Next I wrote a method that recursively checks each pixel in the pixelDifference list to create PixelRectangle objects that only contain pixels that are directly next to each other. (Pretty sure this bad boy is causing the majority of the destruction as it gave me a stack overflow error.)
I then worked out the x,y coordinates and dimensions of the rectangle based on the pixels that were stored in the list of the PixelRectangle Object and drew a rectangle over the original image to show where the differences were.
My questions are: Am I going about this the correct way? Would a quad tree hold any value for this project? If you could give me the basic steps on how something like this is normally achieved I would be grateful. Thanks in advance.
Dave.
Looks like you want to implement blob detection. My suggestion is not to reinvent the wheel and just use OpenCvSharp or Emgu to do this; google 'blob detection' and OpenCV.
If you want to do it yourself, here are my 2 cents' worth.
First of all, let's clarify what you want to do. These are really two separate things:
1. Compute the difference between the two images (I am assuming they are the same dimensions).
2. Draw a box around 'areas' that are 'different' as measured by 1. The questions here are what is an 'area' and what is considered 'different'.
My suggestion for each step:
(My assumption is that both images are grey scale. If not, compute the sum of the colours for each pixel to get a grey value.)
1) Cycle through all pixels in both images and subtract them. Set a threshold on the absolute difference to determine whether the difference is sufficient to represent an actual change in the scene (as opposed to sensor noise etc. if the images are from a camera). Then store the result in a third image: 0 for no difference, 255 for a difference. If done right this should be REALLY fast. However, in C# you must use pointers to get decent performance. Here is an example of how to do this (note: code not tested!!):
/// <summary>
/// computes difference between two images and stores result in a third image
/// input images must be of same dimension and colour depth
/// </summary>
/// <param name="imageA">first image</param>
/// <param name="imageB">second image</param>
/// <param name="imageDiff">output 0 if same, 255 if different</param>
/// <param name="width">width of images</param>
/// <param name="height">height of images</param>
/// <param name="channels">number of colour channels for the input images</param>
/// <param name="threshold">minimum absolute difference that counts as a change</param>
unsafe void ComputeDifference(byte[] imageA, byte[] imageB, byte[] imageDiff, int width, int height, int channels, int threshold)
{
    int ch = channels;
    fixed (byte* piA = imageA, piB = imageB, piD = imageDiff)
    {
        if (ch > 1) // this is a colour image (assuming for RGB ch == 3 and RGBA == 4)
        {
            for (int r = 0; r < height; r++)
            {
                byte* pA = piA + r * width * ch;
                byte* pB = piB + r * width * ch;
                byte* pD = piD + r * width; // this has only one channel!
                for (int c = 0; c < width; c++)
                {
                    // assuming three colour channels; if channels is larger, ignore the extra (as it's likely alpha)
                    int LA = pA[c * ch] + pA[c * ch + 1] + pA[c * ch + 2];
                    int LB = pB[c * ch] + pB[c * ch + 1] + pB[c * ch + 2];
                    if (Math.Abs(LA - LB) > threshold)
                    {
                        pD[c] = 255;
                    }
                    else
                    {
                        pD[c] = 0;
                    }
                }
            }
        }
        else // single grey scale channel
        {
            for (int r = 0; r < height; r++)
            {
                byte* pA = piA + r * width;
                byte* pB = piB + r * width;
                byte* pD = piD + r * width; // this has only one channel!
                for (int c = 0; c < width; c++)
                {
                    if (Math.Abs(pA[c] - pB[c]) > threshold)
                    {
                        pD[c] = 255;
                    }
                    else
                    {
                        pD[c] = 0;
                    }
                }
            }
        }
    }
}
2)
Not sure what you mean by 'area' here. There are several solutions depending on what you mean, from simplest to hardest:
a) Colour each difference pixel red in your output.
b) Assuming you only have one area of difference (unlikely), compute the bounding box of all 255 pixels in your output image. This can be done with a simple max/min over the x and y positions of all 255 pixels; it is a single pass through the image and should be very fast (see the sketch after this list).
c) If you have lots of different areas that change, compute the 'connected components', that is, collections of pixels that are connected to each other. Of course this only works on a binary image (i.e. on or off, or 0 and 255 as in our case). You can implement this in C# and I have done it before, but I won't do it for you here; it's a bit involved. Algorithms are out there; again, see OpenCV or google 'connected components'.
Once you have a list of connected components, draw a box around each. Done.
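For option (b), a minimal sketch of the bounding-box pass over the single-channel difference image (array layout as produced by ComputeDifference above; the method name is illustrative):
static Rectangle BoundingBoxOfChanges(byte[] imageDiff, int width, int height)
{
    int minX = width, minY = height, maxX = -1, maxY = -1;
    for (int r = 0; r < height; r++)
    {
        for (int c = 0; c < width; c++)
        {
            if (imageDiff[r * width + c] == 255) // changed pixel
            {
                if (c < minX) minX = c;
                if (c > maxX) maxX = c;
                if (r < minY) minY = r;
                if (r > maxY) maxY = r;
            }
        }
    }
    // no changed pixels at all
    if (maxX < 0) return Rectangle.Empty;
    return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
}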
You're pretty much going about it the right way. Step 3 shouldn't be causing a StackOverflow exception if it's implemented correctly so I'd take a closer look at that method.
What's most likely happening is that your recursive check of each member of PixelDifference is running infinitely. Make sure you keep track of which Pixels have been checked. Once you check a Pixel it no longer needs to be considered when checking neighbouring Pixels. Before checking any neighbouring pixel make sure it hasn't already been checked itself.
As an alternative to keeping track of which Pixels have been checked you can remove an item from PixelDifference once it has been checked. Of course, this may require a change in the way you implement your algorithm since removing an element from a List while checking it can bring a whole new set of issues.
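One way to avoid the recursion entirely is to group the difference pixels iteratively with an explicit stack and a visited set. This is only a sketch, using Point and HashSet rather than the asker's Pixel/PixelRectangle classes:
// groups difference pixels into connected clusters without recursion,
// so deep regions cannot overflow the call stack
static List<List<Point>> GroupConnected(HashSet<Point> differences)
{
    var visited = new HashSet<Point>();
    var clusters = new List<List<Point>>();
    foreach (var start in differences)
    {
        if (!visited.Add(start)) continue; // already assigned to a cluster
        var cluster = new List<Point>();
        var stack = new Stack<Point>();
        stack.Push(start);
        while (stack.Count > 0)
        {
            var p = stack.Pop();
            cluster.Add(p);
            // 4-connected neighbours; add the diagonal offsets for 8-connectivity
            foreach (var n in new[] { new Point(p.X + 1, p.Y), new Point(p.X - 1, p.Y),
                                      new Point(p.X, p.Y + 1), new Point(p.X, p.Y - 1) })
            {
                if (differences.Contains(n) && visited.Add(n))
                    stack.Push(n);
            }
        }
        clusters.Add(cluster);
    }
    return clusters;
}
Each cluster's bounding rectangle then gives you the box to draw, just as with the PixelRectangle approach.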
There's a much simpler way of finding the difference of two images.
So if you have two images
Image<Gray, Byte> A;
Image<Gray, Byte> B;
You can get their differences fast by
A - B
Of course, images don't store negative values so to get differences in cases where pixels in image B are greater than image A
B - A
Combining these together
(A - B) + (B - A)
This is ok, but we can do even better.
This can be evaluated using Fourier transforms.
CvInvoke.cvDFT(A.Convert<Gray, Single>().Ptr, DFTA.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);
CvInvoke.cvDFT(B.Convert<Gray, Single>().Ptr, DFTB.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);
CvInvoke.cvDFT((DFTB - DFTA).Convert<Gray, Single>().Ptr, AB.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, -1);
CvInvoke.cvDFT((DFTA - DFTB).Ptr, BA.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, -1);
I find that the results from this method are much better.
You can make a binary image out of this, i.e. threshold the image so pixels with no change store 0 while pixels that have changed store 255.
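As a sketch of that thresholding step with Emgu CV (assuming its Image<Gray, Byte> API; AbsDiff folds both subtraction directions into one call, and the threshold value 30 is just a placeholder):
// requires Emgu.CV and Emgu.CV.Structure
// absolute per-pixel difference, equivalent to (A - B) + (B - A) for byte images
Image<Gray, Byte> diff = A.AbsDiff(B);
// binarise: pixels that changed by more than the threshold become 255, the rest 0
Image<Gray, Byte> mask = diff.ThresholdBinary(new Gray(30), new Gray(255));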
Now as far as the second part of the problem goes, I suppose there's a simple crude solution:
Partition the image into rectangular regions. Perhaps there's no need to go as far as using quad trees. Say, an 8x8 grid... (For different results, you can experiment with different grid sizes).
Then use the convex hull function within these regions. These convex hulls can be turned into rectangles by finding the min and max x and y coordinates of their vertices.
Should be fast and simple
The image is large and I used the GetPixel and SetPixel methods to access the bits, but found that this was way too slow, so I went to implement LockBits and UnlockBits but could not get my head around it. I also went through Bob Powell's tutorials but could not understand them. So, I am asking for some help here to get the GLCM from the image.
GLCM is generally a very computationally intensive algorithm. It iterates through each pixel, for each neighbor. Even C++ image processing libraries have this issue.
GLCM does however lend itself quite nicely to parallel (multi-threaded) implementations as the calculations for each reference pixel are independent.
With regard to using LockBits and UnlockBits, see the example code below. One thing to keep in mind is that the image can be padded for optimization reasons. Also, if your image has a different bit depth or multiple channels, you will need to adjust the code accordingly.
BitmapData data = image.LockBits(new Rectangle(0, 0, width, height),
    ImageLockMode.ReadOnly, PixelFormat.Format8bppIndexed); // 8-bit grayscale is stored as indexed in GDI+
byte* dataPtr = (byte*)data.Scan0;
int rowPadding = data.Stride - image.Width;
// iterate over height (rows)
for (int i = 0; i < height; i++)
{
    // iterate over width (columns)
    for (int j = 0; j < width; j++)
    {
        // pixel value
        int value = dataPtr[0];
        // advance to next pixel
        dataPtr++;
    }
    // at the end of each row, skip the extra padding
    if (rowPadding > 0)
    {
        dataPtr += rowPadding;
    }
}
image.UnlockBits(data);
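To connect this to the GLCM itself, here is a minimal sketch of how the co-occurrence counts could be accumulated (assumptions: 8-bit grey levels, a single horizontal neighbour offset of (1, 0), and a 256x256 glcm array; this is not a full GLCM implementation and must run while the bitmap is still locked):
int[,] glcm = new int[256, 256]; // glcm[referenceLevel, neighbourLevel]
byte* basePtr = (byte*)data.Scan0;
for (int i = 0; i < height; i++)
{
    byte* row = basePtr + i * data.Stride;
    for (int j = 0; j < width - 1; j++) // stop one pixel early for the +1 neighbour
    {
        glcm[row[j], row[j + 1]]++;
    }
}
Other offsets (vertical, diagonal) work the same way; only the neighbour index changes. Because each row is independent, this is also the loop you would split across threads for a parallel version.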
On a form I have a PictureBox, a button to load an image into the PictureBox, and a couple more buttons to do some operations on the image loaded into the PictureBox.
I load a bitmap image into the PictureBox and then I want to perform some operation on pixels in the range rgb(150,150,150) to rgb(192,222,255) of the loaded image.
Is it possible to do this using SetPixel method?
Is there any way to specify a range of RGB values in C#?
A simple way would be something like this:
for (int i = 0; i < width; i++)
    for (int j = 0; j < height; j++)
    {
        Color c = bitmap.GetPixel(i, j);
        if (ColorWithinRange(c))
        {
            // do stuff
        }
    }
With ColorWithinRange defined like this:
private readonly Color _from = Color.FromArgb(150, 150, 150);
private readonly Color _to = Color.FromArgb(192, 222, 255);
bool ColorWithinRange(Color c)
{
return
(_from.R <= c.R && c.R <= _to.R) &&
(_from.G <= c.G && c.G <= _to.G) &&
(_from.B <= c.B && c.B <= _to.B);
}
For large bitmap sizes, however, GetPixel and SetPixel become very slow. So, after you have implemented your algorithm, if it feels slow, you can use the Bitmap.LockBits method to pin the bitmap (prevent GC from moving it around memory) and allow yourself fast unsafe access to individual bytes.
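As a rough sketch of what that LockBits version might look like for the range check (assuming a 32bpp ARGB bitmap and the ColorWithinRange helper above; compile with /unsafe):
var rect = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
var data = bitmap.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
try
{
    unsafe
    {
        byte* scan0 = (byte*)data.Scan0;
        for (int y = 0; y < bitmap.Height; y++)
        {
            byte* row = scan0 + y * data.Stride;
            for (int x = 0; x < bitmap.Width; x++)
            {
                // 32bppArgb is stored as B, G, R, A in memory
                byte b = row[x * 4], g = row[x * 4 + 1], r = row[x * 4 + 2];
                if (ColorWithinRange(Color.FromArgb(r, g, b)))
                {
                    // do stuff, e.g. write modified B, G, R values back into row[x * 4 .. x * 4 + 2]
                }
            }
        }
    }
}
finally
{
    bitmap.UnlockBits(data);
}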
Loop through your PictureBox's image and use GetPixel to get each pixel, check if the pixel's RGB is in range, and use SetPixel to modify the pixel.
The GetPixel / SetPixel approach (suggested in previous answers) should work correctly, BUT it's very slow, especially if you want to check every pixel in a large image.
If you want to use more efficient method, you can try to use unsafe code. It seems a little bit more complicated, but if you have worked with pointers before, it shouldn't be a problem.
You can find some more information about this method in other questions on StackOverflow, like: Unsafe Per Pixel access, 30ms access for 1756000 pixels or find a color in an image in c#.