How to specify a range of RGB values in C#

On a form I have a PictureBox, a button to load an image into the PictureBox, and a couple more buttons that perform operations on the loaded image.
I load a bitmap image into the PictureBox and then want to perform an operation on the pixels whose colors fall in the range rgb(150,150,150) to rgb(192,222,255).
Is it possible to do this using the SetPixel method?
Is there any way to specify a range of RGB values in C#?

A simple way would be something like this:
for (int i = 0; i < width; i++)
    for (int j = 0; j < height; j++)
    {
        Color c = bitmap.GetPixel(i, j);
        if (ColorWithinRange(c))
        {
            // do stuff
        }
    }
With ColorWithinRange defined like this:
private readonly Color _from = Color.FromArgb(150, 150, 150);
private readonly Color _to = Color.FromArgb(192, 222, 255);

bool ColorWithinRange(Color c)
{
    return
        (_from.R <= c.R && c.R <= _to.R) &&
        (_from.G <= c.G && c.G <= _to.G) &&
        (_from.B <= c.B && c.B <= _to.B);
}
For large bitmap sizes, however, GetPixel and SetPixel become very slow. So, after you have implemented your algorithm, if it feels slow, you can use the Bitmap.LockBits method to pin the bitmap (prevent GC from moving it around memory) and allow yourself fast unsafe access to individual bytes.
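For illustration, here is a minimal sketch of that LockBits approach, assuming a 32bpp ARGB bitmap, the _from/_to fields from the snippet above, and a project compiled with /unsafe; the method name is mine and the code is untested.
// Sketch only: pin the bitmap once and walk the raw BGRA bytes.
// Requires System.Drawing and System.Drawing.Imaging.
unsafe void ProcessRange(Bitmap bitmap)
{
    var rect = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
    BitmapData data = bitmap.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    try
    {
        for (int y = 0; y < data.Height; y++)
        {
            // Stride is the length of one row in bytes (it may include padding).
            byte* row = (byte*)data.Scan0 + y * data.Stride;
            for (int x = 0; x < data.Width; x++)
            {
                byte b = row[x * 4];      // bytes are stored in BGRA order
                byte g = row[x * 4 + 1];
                byte r = row[x * 4 + 2];
                if (_from.R <= r && r <= _to.R &&
                    _from.G <= g && g <= _to.G &&
                    _from.B <= b && b <= _to.B)
                {
                    // do stuff, e.g. write new channel values back into row[...]
                }
            }
        }
    }
    finally
    {
        bitmap.UnlockBits(data);
    }
}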

Loop through your PictureBox's image and use GetPixel to get each pixel, check whether the pixel's RGB is in range, and use SetPixel to modify it.

The GetPixel / SetPixel approach (suggested in the previous answers) should work correctly, BUT it's very slow, especially if you want to check every pixel of a large image.
If you want a more efficient method, you can try unsafe code. It seems a little more complicated, but if you have worked with pointers before, it shouldn't be a problem.
You can find more information about this method in other questions on Stack Overflow, such as Unsafe Per Pixel access, 30ms access for 1756000 pixels or find a color in an image in c#.

Related

Aiming for a more accurate and precise color replacer

What I'm trying to achieve:
In my program, I want to let the user enter any image, then my program resizes it then it gets all of the pixels and checks whether their color matches the color of the pixel at (0, 0). If it does match that pixel then it's going to be replaced with a hardcoded color.
The issue I'm having
My code works when I resize the image and replace the pixel colors with new ones; however, it usually does not replace some of the pixels, especially those closest to the image itself, where the colors begin to diverge.
What I tried to fix the problem
I tried declaring the hex colors as a string and, whenever the letter C came up, telling the program to remove the pixel matching that string. However, I found that to be very inefficient due to how often it replaced pixels that shouldn't be replaced.
My code
public class PicGen
{
    public PicGen(PictureBox pictureBox)
    {
        Bitmap picBitmap = new(pictureBox.Image);
        Bitmap resized = new(picBitmap, new(52, 52));

        // Sample the background colour from the top-right pixel of the resized image.
        Color backColorBottom = resized.GetPixel(51, 0);

        for (int i = 0; i < resized.Width; i++)
        {
            for (int j = 0; j < resized.Height; j++)
            {
                // Replace every pixel that exactly matches the sampled background colour.
                if (resized.GetPixel(i, j) == backColorBottom)
                    resized.SetPixel(i, j, Color.FromArgb(54, 57, 63));
            }
        }

        Clipboard.SetImage(resized);
    }
}
Example: notice how the white around the image isn't replaced like the rest of the background? (The example and original images were attached to the original post.)

Optimizing a bitmap character drawing algorithm in C#

Is there any way to optimize this:
A character is stored in a "matrix" of bytes with dimensions 9x16; for the sake of the example, let's call it character.
The bytes can be either 1 or 0, meaning draw foreground and draw background respectively.
The X and Y variables are integers representing the X and Y coordinates used for the SetPixel() function. BG and FG represent the background and foreground colors respectively, both of type Color.
The drawing part of the algorithm itself looks like this:
for (int i = 0; i < 16; i++)
{
    for (int j = 0; j < 9; j++)
    {
        if (character[i][j] == 1)
        {
            SetPixel(X, Y, BG);
        }
        else
        {
            SetPixel(X, Y, FG);
        }
        X++;
    }
    X = 0;
    Y++;
}
Later on, X is incremented by 9 and Y is set back to 0.
The problem with this algorithm is that when it's called to draw a string (many characters in sequence), it's extremely slow.
I'm not really sure what characters mean, however.
GetPixel internally calls LockBits to pin the memory.
Ergo, it's best to use LockBits once and be done with it.
Always call UnlockBits.
Direct pointer access using unsafe can give you a small amount of extra performance as well.
Also (in this case) your for loops can be optimized (code-wise) to include your other indexes.
Example:
protected unsafe void DoStuff(string path)
{
    ...
    using (var b = new Bitmap(path))
    {
        var r = new Rectangle(Point.Empty, b.Size);
        // The buffer is written to, so lock it for read/write access.
        var data = b.LockBits(r, ImageLockMode.ReadWrite, PixelFormat.Format32bppPArgb);
        var p = (int*)data.Scan0;
        try
        {
            for (int i = 0; i < 16; i++, Y++)
                for (int j = 0, X = 0; j < 9; j++, X++)
                    // Write the packed ARGB value directly into the locked buffer.
                    *(p + X + Y * r.Width) = character[i][j] == 1 ? BG.ToArgb() : FG.ToArgb();
        }
        finally
        {
            b.UnlockBits(data);
        }
    }
}
Bitmap.LockBits
Locks a Bitmap into system memory.
Bitmap.UnlockBits
Unlocks this Bitmap from system memory.
unsafe
The unsafe keyword denotes an unsafe context, which is required for any operation involving pointers.
Further reading
Unsafe Code and Pointers
Bitmap.GetPixel
LockBits vs Get Pixel Set Pixel - Performance

Comparing images and labeling the differences c#

I am currently working on a project in which I am required to write software that compares two images of the same area and draws a box around the differences. I wrote the program in C# .NET in a few hours but soon realized it was INCREDIBLY expensive to run. Here are the steps I implemented it in:
1. Created a Pixel class that stores the x,y coordinates of each pixel, and a PixelRectangle class that stores a list of pixels along with width, height, x and y properties.
2. Looped through every pixel of each image, comparing the colour of each pair of corresponding pixels. If the colours were different, I created a new Pixel object with the x,y coordinates of that pixel and added it to a pixelDifference list.
3. Next I wrote a method that recursively checks each pixel in the pixelDifference list to create PixelRectangle objects that only contain pixels that are directly next to each other. (Pretty sure this bad boy is causing the majority of the destruction, as it gave me a stack overflow error.)
4. I then worked out the x,y coordinates and dimensions of each rectangle based on the pixels stored in the PixelRectangle object's list, and drew a rectangle over the original image to show where the differences were.
My questions are: Am I going about this the correct way? Would a quad tree hold any value for this project? If you could give me the basic steps on how something like this is normally achieved I would be grateful. Thanks in advance.
Dave.
looks like you want to implement blob detection. my suggestion is not to reinvent the wheel and just use openCVSharp or emgu to do this. google 'blob detection' & opencv
if you want to do it yourself here my 2 cents worth:
first of all, let's clarify what you want to do. really two separate things:
1. compute the difference between two images (i am assuming they are the same dimensions)
2. draw a box around 'areas' that are 'different' as measured by 1. questions here are what is an 'area' and what is considered 'different'.
my suggestion for each step:
(my assumption is both images are grey scale. if not, compute the sum of colours for each pixel to get a grey value)
1) cycle through all pixels in both images and subtract them. set a threshold on the absolute difference to determine if their difference is sufficient to represent an actual change in the scene (as opposed to sensor noise etc. if the images are from a camera). then store the result in a third image: 0 for no difference, 255 for a difference. if done right this should be REALLY fast. however, in C# you must use pointers to get decent performance. here's an example of how to do this (note: code not tested!!):
/// <summary>
/// Computes the difference between two images and stores the result in a third image.
/// Input images must be of the same dimensions and colour depth.
/// </summary>
/// <param name="imageA">first image</param>
/// <param name="imageB">second image</param>
/// <param name="imageDiff">output: 0 if same, 255 if different</param>
/// <param name="width">width of the images</param>
/// <param name="height">height of the images</param>
/// <param name="channels">number of colour channels for the input images</param>
/// <param name="threshold">minimum absolute difference that counts as a change</param>
unsafe void ComputeDifference(byte[] imageA, byte[] imageB, byte[] imageDiff, int width, int height, int channels, int threshold)
{
    int ch = channels;
    fixed (byte* piA = imageA, piB = imageB, piD = imageDiff)
    {
        if (ch > 1) // this is a colour image (assuming ch == 3 for RGB and ch == 4 for RGBA)
        {
            for (int r = 0; r < height; r++)
            {
                byte* pA = piA + r * width * ch;
                byte* pB = piB + r * width * ch;
                byte* pD = piD + r * width; // the output has only one channel!
                for (int c = 0; c < width; c++)
                {
                    // assuming three colour channels; if channels is larger, ignore the extra (it's likely alpha)
                    int LA = pA[c * ch] + pA[c * ch + 1] + pA[c * ch + 2];
                    int LB = pB[c * ch] + pB[c * ch + 1] + pB[c * ch + 2];
                    if (Math.Abs(LA - LB) > threshold)
                    {
                        pD[c] = 255;
                    }
                    else
                    {
                        pD[c] = 0;
                    }
                }
            }
        }
        else // single grey-scale channel
        {
            for (int r = 0; r < height; r++)
            {
                byte* pA = piA + r * width;
                byte* pB = piB + r * width;
                byte* pD = piD + r * width; // the output has only one channel!
                for (int c = 0; c < width; c++)
                {
                    if (Math.Abs(pA[c] - pB[c]) > threshold)
                    {
                        pD[c] = 255;
                    }
                    else
                    {
                        pD[c] = 0;
                    }
                }
            }
        }
    }
}
2)
not sure what you mean by 'area' here. several solutions depending on what you mean, from simplest to hardest:
a) colour each difference pixel red in your output.
b) assuming you only have one area of difference (unlikely), compute the bounding box of all 255 pixels in your output image. this can be done using a simple max / min on the x and y positions of all 255 pixels. single pass through the image and should be very fast (see the sketch below).
c) if you have lots of different areas that change - compute the 'connected components', that is, collections of pixels that are connected to each other. of course this only works on a binary image (i.e. on or off, or 0 and 255 as in our case). you can implement this in c# and i have done it before, but i won't do it for you here; it's a bit involved. algorithms are out there. again, opencv or google connected components.
once you have a list of CCs, draw a box around each. done.
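a minimal sketch of option b), assuming imageDiff is the one-channel output of ComputeDifference above (255 = changed, 0 = unchanged); the method name and the Rectangle return type are my own choices:
// Sketch only: bounding box of all 255-valued pixels in a single-channel
// diff image stored row by row (width * height bytes). Uses System.Drawing.Rectangle.
Rectangle BoundingBoxOfChanges(byte[] imageDiff, int width, int height)
{
    int minX = int.MaxValue, minY = int.MaxValue, maxX = -1, maxY = -1;
    for (int y = 0; y < height; y++)
    {
        for (int x = 0; x < width; x++)
        {
            if (imageDiff[y * width + x] == 255)
            {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    if (maxX < 0) return Rectangle.Empty; // no differences found
    return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
}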
You're pretty much going about it the right way. Step 3 shouldn't be causing a StackOverflow exception if it's implemented correctly so I'd take a closer look at that method.
What's most likely happening is that your recursive check of each member of PixelDifference is running infinitely. Make sure you keep track of which Pixels have been checked. Once you check a Pixel it no longer needs to be considered when checking neighbouring Pixels. Before checking any neighbouring pixel make sure it hasn't already been checked itself.
As an alternative to keeping track of which Pixels have been checked you can remove an item from PixelDifference once it has been checked. Of course, this may require a change in the way you implement your algorithm since removing an element from a List while checking it can bring a whole new set of issues.
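As an illustration of the "keep track of which pixels have been checked" idea, here is a rough sketch that groups neighbouring difference pixels iteratively (an explicit stack instead of recursion, which also sidesteps the StackOverflowException). It assumes the Pixel class exposes settable integer X and Y properties; everything else is hypothetical and untested.
// Sketch only: groups directly adjacent pixels from pixelDifference into clusters.
// Requires System.Collections.Generic and System.Linq.
List<List<Pixel>> GroupAdjacentPixels(List<Pixel> pixelDifference)
{
    var groups = new List<List<Pixel>>();
    // Coordinates that have not been assigned to a group yet.
    var remaining = new HashSet<(int X, int Y)>(pixelDifference.Select(p => (p.X, p.Y)));

    foreach (var start in pixelDifference)
    {
        if (!remaining.Remove((start.X, start.Y)))
            continue; // already grouped

        var group = new List<Pixel> { start };
        var stack = new Stack<Pixel>();
        stack.Push(start);

        while (stack.Count > 0)
        {
            var p = stack.Pop();
            // Visit the four direct neighbours; Remove returns false if a coordinate was already handled.
            foreach (var (nx, ny) in new[] { (p.X + 1, p.Y), (p.X - 1, p.Y), (p.X, p.Y + 1), (p.X, p.Y - 1) })
            {
                if (remaining.Remove((nx, ny)))
                {
                    var neighbour = new Pixel { X = nx, Y = ny };
                    group.Add(neighbour);
                    stack.Push(neighbour);
                }
            }
        }
        groups.Add(group);
    }
    return groups;
}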
There's a much simpler way of finding the difference of two images.
So if you have two images
Image<Gray, Byte> A;
Image<Gray, Byte> B;
You can get their differences fast by
A - B
Of course, images don't store negative values, so to get the differences where pixels in image B are greater than those in image A:
B - A
Combining these together
(A - B) + (B - A)
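As a side note, that combination is simply the per-pixel absolute difference, and if I remember the Emgu CV API correctly it is available directly as AbsDiff, which avoids the two subtractions (treat this as an unverified sketch for your Emgu version):
// Per-pixel absolute difference |A - B| in one call.
Image<Gray, Byte> diff = A.AbsDiff(B);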
This is ok, but we can do even better.
This can be evaluated using Fourier transforms.
CvInvoke.cvDFT(A.Convert<Gray, Single>().Ptr, DFTA.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);
CvInvoke.cvDFT(B.Convert<Gray, Single>().Ptr, DFTB.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, -1);
CvInvoke.cvDFT((DFTB - DFTA).Convert<Gray, Single>().Ptr, AB.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, -1);
CvInvoke.cvDFT((DFTA - DFTB).Ptr, BA.Ptr, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INVERSE, -1);
I find that the results from this method are much better.
You can make a binary image out of this, i.e. threshold the image so pixels with no change have 0 while pixels that have changed store 255.
Now as far as the second part of the problem goes, I suppose there's a simple crude solution:
Partition the image into rectangular regions. Perhaps there's no need to go as far as using quad trees. Say, an 8x8 grid... (For different results, you can experiment with different grid sizes).
Then use the convex hull function within these regions. These convex hulls can be turned into rectangles by finding the min and max x and y coordinates of their vertices.
Should be fast and simple.

Image processing C#

I have a function to check if an image is just one color.
bool r = true;
Color checkColor = image.GetPixel(0, 0);

for (int x = 0; x < image.Width; x++)
{
    for (int y = 0; y < image.Height; y++)
    {
        if (image.GetPixel(x, y) != checkColor) { r = false; }
    }
}

// image color
clrOut = checkColor;
return r;
But this algorithm is slow for big images.
Does anyone know a way to do this using pixel shaders and the GPU?
You don't need pixel shaders and a GPU to speed this up. Use LockBits. Bob Powell has a good tutorial on doing exactly what you want.
Also, looking at your code, try reversing the for loops; it gives better memory access:
for ( y...
    ...
    for ( x...
The next step is to unroll some of the pixel accesses. Try fetching 4 or 8 pixels in the inner loop:
for ( y...
    ...
    for ( x = 0; x < image.Width; x += 4 )
        pixel0 = image.GetPixel(x, y);
        pixel1 = image.GetPixel(x + 1, y);
        pixel2 = image.GetPixel(x + 2, y);
        pixel3 = image.GetPixel(x + 3, y);
        if ( pixel0 ...
As stated earlier, using Bitmap.LockBits allows you to access pixels via pointers, which is the fastest you can get on a CPU. You can apply the loop ordering and pixel unrolling to that technique too.
If this isn't fast enough, then there is a choice between C# multi-threading or the GPU with OpenCL and its C# bindings.
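For reference, here is a minimal sketch of the pointer-based version of the single-colour check, assuming a 32bpp ARGB bitmap and a project compiled with /unsafe; it also returns early on the first mismatch, which the GetPixel version above does not do.
// Sketch only: returns true if every pixel equals the pixel at (0, 0).
// Requires System.Drawing and System.Drawing.Imaging.
unsafe bool IsSingleColor(Bitmap image, out Color clrOut)
{
    clrOut = image.GetPixel(0, 0);
    int reference = clrOut.ToArgb();

    var rect = new Rectangle(0, 0, image.Width, image.Height);
    BitmapData data = image.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    try
    {
        for (int y = 0; y < data.Height; y++)
        {
            // Each row is Stride bytes long; read it as packed 32-bit ARGB values.
            int* row = (int*)((byte*)data.Scan0 + y * data.Stride);
            for (int x = 0; x < data.Width; x++)
            {
                if (row[x] != reference)
                    return false; // early exit on the first mismatch
            }
        }
        return true;
    }
    finally
    {
        image.UnlockBits(data);
    }
}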
This code is slow because you use GetPixel. You can make it much faster by using direct pointer access. Only if that's not enough would I look into pixel shaders.
I've written some helper libraries: https://github.com/CodesInChaos/ChaosUtil/tree/master/Chaos.Image
In particular the Pixels and the RawColor types should be useful.
If you're only looking to distinguish an image with large areas of variation from one that is all the same color, you could shrink it down to a 1x1 pixel using bilinear filtering and read pixel (0, 0).
If the pixel is VERY different from what you expect (RGB distance versus a tolerance), you can be sure that there was some variation in the original image.
Of course, this depends on what you really want to do with this info so YMMV.
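A rough sketch of that trick, assuming System.Drawing; how closely the single pixel approximates the true average depends on the interpolation mode GDI+ uses when scaling down, so treat it as an approximation.
// Sketch only: scale the image down to one pixel and read its colour.
Color ApproximateAverageColor(Bitmap image)
{
    using (var tiny = new Bitmap(1, 1))
    {
        using (var g = Graphics.FromImage(tiny))
        {
            g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBilinear;
            g.DrawImage(image, new Rectangle(0, 0, 1, 1));
        }
        return tiny.GetPixel(0, 0);
    }
}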

C# GetPixel not selecting all pixels

I have a problem with getting the pixels from an image. I load an image, select a pixel from the image and retrieve its color, and then I generate a matrix indexMatrix[bitmap_height][bitmap_width] which contains 1 or 0 depending on whether the [x, y] color of the bitmap is the same as the selected color. The problem is that the program doesn't select all the pixels, although it should. It only retrieves a part of them (I am sure the 'forgotten' pixels are the same color as the selected color).
The weird thing is that if I run my program on the new image (the one constructed from the matrix) it returns the same image (as it should), but I can't figure out how to fix the problem.
Please help!!!
Regards,
Alex Badescu
and some code from my project :
bitmap declaration:
m_Bitmap = (Bitmap)Bitmap.FromFile(openFileDialog.FileName, false);
Here I calculate the matrix:
int bitmapWidth = m_Bitmap.Width;
int bitmapHeight = m_Bitmap.Height;
indexMatrix = new int[bitmapHeight][];

if (imageIsLoaded && colorIsSelected)
{
    for (int i = 0; i < bitmapHeight; i++)
    {
        indexMatrix[i] = new int[bitmapWidth];
        for (int j = 0; j < bitmapWidth; j++)
        {
            Color temp = m_Bitmap.GetPixel(j, i);
            if (temp == selectedColor)
                indexMatrix[i][j] = 1;
            else indexMatrix[i][j] = 0;
        }
    }
    MessageBox.Show("matrix generated succesfully");
}
matrixIsCalculated = true;
There is no obvious failure mode here, other than that the pixel isn't actually a match for the color, being off by, say, only one in the Color.B value for example. You cannot see this with the unaided eye.
These kinds of very subtle color changes are quite common when the image has been resized. An interpolation filter alters the colors subtly, even when not strictly needed. Another failure mode is using a compressed image format like JPEG; the compression algorithm changes colors.
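If that is the cause, one common workaround (a sketch, not part of the original answer) is to compare the colours with a small per-channel tolerance instead of exact equality:
// Sketch only: treat two colours as equal if each channel differs by at most `tolerance`.
static bool ColorsMatch(Color a, Color b, int tolerance)
{
    return Math.Abs(a.R - b.R) <= tolerance &&
           Math.Abs(a.G - b.G) <= tolerance &&
           Math.Abs(a.B - b.B) <= tolerance;
}
// Usage inside the loop above, e.g. with a tolerance of 2:
// if (ColorsMatch(temp, selectedColor, 2)) indexMatrix[i][j] = 1;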
