In my application I have loaded a picture and I want to be able to detect similar colors. So if I select a color I want the application to be able to find all pixels with that same (or almost the same) color. This is what I wrote for a detection system that looks in a vertical direction between the point of the mouse click and the end of the bitmap.
for (int y = mouseY; y < m_bitmap.Height; y++)
{
    Color pixel = m_bitmap.GetPixel(mouseX, y);
    // check if the color differs by more than the threshold in any channel
    if ((pixel.R > curcolor.R + threshold || pixel.R < curcolor.R - threshold) ||
        (pixel.G > curcolor.G + threshold || pixel.G < curcolor.G - threshold) ||
        (pixel.B > curcolor.B + threshold || pixel.B < curcolor.B - threshold))
    { //YESSSSS!
        if ((y - ytop > minheight) && (curcolor != Color.White)) // no white, at least 15px height
        {
            colorlayers.Add(new ColorLayer(curcolor, y - 1, ytop));
        }
        curcolor = pixel;
        ytop = y;
    }
}
Would this be the best way? Somehow it doesn't seem to work too well with yellowish colors.
RGB is a 3D space.
A color can differ by up to the threshold in all three directions and still not be very similar to the original (and what is similar according to the numbers may not look similar to human eyes).
I would do the check in HSL (for example), where the hue value is a finite 1D range. Just for example:
for (int y = mouseY; y < m_bitmap.Height; y++)
{
    Color pixel = m_bitmap.GetPixel(mouseX, y);
    if (Math.Abs(pixel.GetHue() - curcolor.GetHue()) <= threshold)
    {
        // ...
    }
}
Moreover, please note that using bitmaps this way is terribly slow (GetPixel() in particular); take a look at this post for a much faster alternative.
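One caveat with comparing hues: GetHue() returns an angle in degrees, so the difference should wrap around 360 (350° and 10° are only 20° apart). A minimal sketch of a wrap-aware distance:

```csharp
using System;

// Hue is an angle in degrees, so the naive Math.Abs difference is wrong
// near the 0/360 boundary. This sketch folds the difference into [0, 180].
static double HueDistance(float h1, float h2)
{
    double d = Math.Abs(h1 - h2) % 360.0;
    return d > 180.0 ? 360.0 - d : d;
}

Console.WriteLine(HueDistance(350f, 10f)); // small: these hues are close
```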
It might be interesting to look at how the magic wand tool in Paint.NET works.
This is how they compare 2 colors:
private static bool CheckColor(ColorBgra a, ColorBgra b, int tolerance)
{
    int sum = 0;
    int diff;

    diff = a.R - b.R;
    sum += (1 + diff * diff) * a.A / 256;

    diff = a.G - b.G;
    sum += (1 + diff * diff) * a.A / 256;

    diff = a.B - b.B;
    sum += (1 + diff * diff) * a.A / 256;

    diff = a.A - b.A;
    sum += diff * diff;

    return (sum <= tolerance * tolerance * 4);
}
Source
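For reference, here is a sketch of the same alpha-weighted test ported to System.Drawing.Color (a hypothetical port; ColorBgra is Paint.NET's own type):

```csharp
using System;
using System.Drawing;

// Hypothetical port of Paint.NET's CheckColor to System.Drawing.Color.
// The squared channel differences are weighted by the first color's alpha.
static bool CheckColor(Color a, Color b, int tolerance)
{
    int sum = 0, diff;
    diff = a.R - b.R; sum += (1 + diff * diff) * a.A / 256;
    diff = a.G - b.G; sum += (1 + diff * diff) * a.A / 256;
    diff = a.B - b.B; sum += (1 + diff * diff) * a.A / 256;
    diff = a.A - b.A; sum += diff * diff;
    return sum <= tolerance * tolerance * 4;
}
```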
The reason why yellow colors give a problem might be that RGB is not a perceptually uniform colorspace. This means that, for a given distance between two points/colors in the colorspace, the perceived difference will in general not be the same everywhere in the space.
That said, you might want to use another color space, like HSL as suggested by Adriano, or perhaps Lab.
If you want to stick to RGB, I would suggest calculating the Euclidean distance (I think it's simpler). Note that in C# the ^ operator is XOR, not exponentiation, so the squares have to be written out:

float distance = (float)Math.Sqrt((pixel.R - curcolor.R) * (pixel.R - curcolor.R) +
                                  (pixel.G - curcolor.G) * (pixel.G - curcolor.G) +
                                  (pixel.B - curcolor.B) * (pixel.B - curcolor.B));
if (distance < threshold)
{
    // Do what you have to.
}
I am struggling to understand how ColorHelper.get works in this project. Here is the C# version from GitHub.
For a better understanding you will need that project from GitHub.
About the project: it is a remake of THIS (in Java) that someone ported to C#. It's
a game in MonoGame called Minicraft.
Problem: Because it's open source, I just want to play with the project and learn more. The main problem is that the color palette is limited, so I can use a sprite with only 4 colors. That means that when I want a colorful sprite, I can't have one. The way coloring works (in my opinion) is through the ColorHelper.get function, which takes 4 ints, each representing a specific color. (Example HERE)
The upper sprite (Human) is from the game and has 4 colors, which are converted from those values.
The bottom sprite is my sea stuff with 8 colors. With ColorHelper.get I was able to convert only 4 colors, so my sprite has to use those 4 specific colors like the Human on top (it can't be colorful).
Code:
ColorHelper.cs
public static int get(int a, int b, int c, int d)
{
    return (get(d) << 24) + (get(c) << 16) + (get(b) << 8) + (get(a));
}

public static int get(int d)
{
    if (d < 0) return 255;
    int r = d / 100 % 10;
    int g = d / 10 % 10;
    int b = d % 10;
    return r * 36 + g * 6 + b;
}
Screen.cs
public void render(int xp, int yp, int tile, int colors, int bits)
{
    xp -= xOffset;
    yp -= yOffset;
    var mirrorX = (bits & BIT_MIRROR_X) > 0;
    var mirrorY = (bits & BIT_MIRROR_Y) > 0;
    var xTile = tile % 32;
    var yTile = tile / 32;
    var toffs = xTile * 8 + yTile * 8 * sheet.width;
    for (int y = 0; y < 8; y++)
    {
        int ys = y;
        if (mirrorY) ys = 7 - y;
        if (y + yp < 0 || y + yp >= h) continue;
        for (int x = 0; x < 8; x++)
        {
            if (x + xp < 0 || x + xp >= w) continue;
            int xs = x;
            if (mirrorX) xs = 7 - x;
            int col = (colors >> (sheet.pixels[xs + ys * sheet.width + toffs] * 8)) & 255;
            if (col < 255)
                pixels[(x + xp) + (y + yp) * w] = col;
        }
    }
}
In public void render(int xp, int yp, int tile, int colors, int bits), the "int colors" parameter receives the result of ColorHelper.get(-1, 50, 250, 455). (Those numbers are fictional, only for example.)
Example of rendering something:
//Height: 2 rows of 8-pixel tiles, so 16 pixels are rendered on the y axis
int h = 2;
//Width: 13 columns of 8-pixel tiles, so 104 pixels are rendered on the x axis
int w = 13;
//4-color sprite, which I don't want!
int titleColor = ColorHelper.get(0, 010, 131, 551);
//Padding left
int xo = (screen.w - w * 8) / 2;
//Padding top
int yo = 24;
for (int y = 0; y < h; y++)
{
    for (int x = 0; x < w; x++)
    {
        //public void render(int xp, int yp, int tile, int colors, int bits) in Screen.cs renders 8x8 pixels
        screen.render(xo + x * 8, yo + y * 8, x + (y + 6) * 32, titleColor, 0);
    }
}
What I want to achieve
When I debug the "int colors" value in "public void render", I get really weird numbers that are unreadable to me. (You can try it on your own; just create a Console Application and paste THIS.)
So I am here to ask you: can anyone help me rework this project so that I can get all pixels from a sprite with the colors as they are in the PNG file, without converting through the ColorHelper.get function? I just want to render the PNG as it is.
If not, that's OK, I will keep struggling :D. I have actually been trying to solve this for about 3 days straight. I think it's too hard for me to rework.
Maybe I am asking a lot, but my brain cannot grasp this. Only this one thing is holding me back from continuing. I hope you understand.
Not important info:
In case someone says I am jumping into big projects and should try easier stuff: I have been working in Unity since 2017, and I want to have full control of my projects and gain better coding knowledge. Learning through games is the best option for me, because I like games. :) For those who don't understand me, I am really sorry, but I am not from America; I am from Czechia.
Thank you very much.
I don't know how to put a long block of code in a comment, so I'll just put it here. I think get with 4 parameters is trying to do this:
// rgba channels range from 0 - 255
public static UInt32 get(int b, int g, int r, int a)
{
    UInt32 rgba = ((UInt32)a << 24) | ((UInt32)r << 16) | ((UInt32)g << 8) | (UInt32)b;
    return rgba; // this returns a 32-bit integer like 3869934080, for example
}
So I don't think it is taking in 4 colors at all; it's trying to take in 1 color through its r, g, b, a inputs (I don't understand why it would use 555 as a range, or the point of the 1-parameter get ~ maybe someone can explain it).
So if you want to display the colors you want, replace the parameters r, g, b, a, which in turn gives you a 32-bit unsigned int (this would presumably be passed into the render function).
Resource about bit shifting here
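To illustrate the packing, here is a sketch of reading the channels back out of such a 32-bit value (channel order assumed as in the snippet above, low byte first):

```csharp
using System;

// Inverse of the packing above: extract each byte by shifting it down
// and truncating to 8 bits. Assumes b in the low byte, a in the high byte.
static (byte b, byte g, byte r, byte a) Unpack(uint rgba) =>
    ((byte)rgba, (byte)(rgba >> 8), (byte)(rgba >> 16), (byte)(rgba >> 24));

Console.WriteLine(Unpack(0x04030201u)); // each byte comes back out separately
```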
Good day,
To bypass the processing step and load the file directly:
To load a .png file into a Texture2D, use:

Texture2D image = Texture2D.FromFile(graphicsDevice, @"C:\path\file.png");

Obviously, image must be used in the containing scope.
I'm currently undertaking a university project that involves object detection and recognition with a Kinect. I'm using the MapDepthFrameToColorSpace method for mapping the depth data to RGB. I believe the issue is with this loop here:
for (int i = 0; i < _depthData.Length; ++i)
{
    ColorSpacePoint newpoint = cPoints[i];
    if (!float.IsNegativeInfinity(newpoint.X) && !float.IsNegativeInfinity(newpoint.Y))
    {
        int colorX = (int)Math.Floor(newpoint.X + 0.5);
        int colorY = (int)Math.Floor(newpoint.Y + 0.5);
        if ((colorX >= 0) && (colorX < colorFrameDescription.Width) && (colorY >= 0) && (colorY < colorFrameDescription.Height))
        {
            int j = ((colorFrameDescription.Width * colorY) + colorX) * bytesPerPixel;
            int newdepthpixel = i * 4;
            displaypixel[newdepthpixel] = colorpixels[j];         // B
            displaypixel[newdepthpixel + 1] = colorpixels[j + 1]; // G
            displaypixel[newdepthpixel + 2] = colorpixels[j + 2]; // R
            displaypixel[newdepthpixel + 3] = 255;                // A
        }
    }
}
It appears that the indexing is not correct, or there are pixels/depth values missing, because the output looks like multiples of the same image, but small and with a limited x range.
http://postimg.org/image/tecnvp1nx/
Let me guess: your output image (displaypixel) is 1920x1080 pixels big? (Though from the link you posted, it seems to be 1829x948?)
That's your problem. MapDepthFrameToColorSpace returns the corresponding position in the color image for each depth pixel. That means you get 512x424 values. Putting those into a 1920x1080 image means only about 10% of the image is filled, and the part that's filled will be jumbled.
If you make your output image 512x424 pixels big instead, it should give you an image like the second one in this article.
Or you could keep your output image at 1920x1080, but instead of putting one pixel after the other, you'd also calculate the position where to put the pixel. So instead of doing
int newdepthpixel = i * 4;
you'd need to do
int newdepthpixel = ((colorFrameDescription.Width * colorY) + colorX) * 4;
That would give you a 1920x1080 image, but with only 512x424 pixels filled, with lots of space in between.
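The index math from that last line can be sketched in isolation (no Kinect SDK needed; 4 bytes per BGRA pixel assumed):

```csharp
using System;

// Where a mapped depth pixel lands in a full-size BGRA output buffer:
// row-major position in pixels, times 4 bytes per pixel.
static int OutputIndex(int colorX, int colorY, int colorWidth) =>
    ((colorWidth * colorY) + colorX) * 4;

Console.WriteLine(OutputIndex(0, 1, 1920)); // start of the second row
```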
I need the ability to determine which Shape a given point falls within. There will be overlapped shapes and I need to find the Shape with the smallest area. For example, given the Shapes and points illustrated in the image below the following would be true:
Point 3 - collides with star
Point 2 - collides with diamond
Point 1 - collides with circle
Given this, I would like to know if there is a built in way to do what is needed.
If you are drawing these shapes manually, you could do a second drawing pass into a separate buffer, and instead of drawing the shape, you write an ID into the buffer if the pixel is within the shape. Then your hit test just has to index into that buffer and retrieve the ID. You would get to re-use your drawing code completely, and it scales much better when you have more shapes, vertices, and hits to test.
I've arrived at a solution that meets the requirements; I'm still interested in hearing if there is a better way of doing this. My approach is as follows: first do a hit test by bounding box, then a geometric hit test based on the type of geometry.
For polygons, I've adapted the C code from http://www.ecse.rpi.edu/Homepages/wrf/Research/Short_Notes/pnpoly.html to work in C#.
int pnpoly(int nvert, float *vertx, float *verty, float testx, float testy)
{
int i, j, c = 0;
for (i = 0, j = nvert-1; i < nvert; j = i++) {
if ( ((verty[i]>testy) != (verty[j]>testy)) &&
(testx < (vertx[j]-vertx[i]) * (testy-verty[i]) / (verty[j]-verty[i]) + vertx[i]) )
c = !c;
}
return c;
}
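Since the C code above needed adapting, here is one possible sketch of such a C# port (a hypothetical adaptation, not necessarily the author's actual code):

```csharp
using System;

// Ray-crossing point-in-polygon test, adapted from pnpoly: count how many
// polygon edges a horizontal ray from the test point crosses; odd = inside.
static bool PointInPolygon(float[] vertx, float[] verty, float testx, float testy)
{
    bool inside = false;
    for (int i = 0, j = vertx.Length - 1; i < vertx.Length; j = i++)
    {
        if (((verty[i] > testy) != (verty[j] > testy)) &&
            (testx < (vertx[j] - vertx[i]) * (testy - verty[i]) / (verty[j] - verty[i]) + vertx[i]))
            inside = !inside;
    }
    return inside;
}

// Unit square: (0.5, 0.5) is inside, (1.5, 0.5) is not.
var sqx = new float[] { 0, 1, 1, 0 };
var sqy = new float[] { 0, 0, 1, 1 };
Console.WriteLine(PointInPolygon(sqx, sqy, 0.5f, 0.5f));
```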
For ellipses, I've adapted this code: http://msdn.microsoft.com/en-us/library/aa231172%28v=vs.60%29.aspx
BOOL CCircCtrl::InCircle(CPoint& point)
{
CRect rc;
GetClientRect(rc);
GetDrawRect(&rc);
// Determine radii
double a = (rc.right - rc.left) / 2;
double b = (rc.bottom - rc.top) / 2;
// Determine x, y
double x = point.x - (rc.left + rc.right) / 2;
double y = point.y - (rc.top + rc.bottom) / 2;
// Apply ellipse formula
return ((x * x) / (a * a) + (y * y) / (b * b) <= 1);
}
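The same ellipse test can be sketched in C#, taking the bounding box as plain doubles (a hypothetical translation of the MFC code above):

```csharp
using System;

// A point (px, py) is inside the ellipse inscribed in the given bounding box
// when x^2/a^2 + y^2/b^2 <= 1, with x, y relative to the ellipse center.
static bool InEllipse(double left, double top, double right, double bottom,
                      double px, double py)
{
    double a = (right - left) / 2;       // horizontal radius
    double b = (bottom - top) / 2;       // vertical radius
    double x = px - (left + right) / 2;  // offset from center
    double y = py - (top + bottom) / 2;
    return (x * x) / (a * a) + (y * y) / (b * b) <= 1;
}

Console.WriteLine(InEllipse(0, 0, 10, 10, 5, 5)); // the center is always inside
```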
I'm updating a plugin for Paint.NET which I made some months ago, called Simulate Color Depth; it reduces the number of colors in the image to the chosen BPP. For a long time it has included dithering, but never ordered dithering, and I thought that would be a nice addition. I searched the internet for something useful and ended up on this wiki page: http://en.wikipedia.org/wiki/Ordered_dithering, and tried to do as written in the pseudocode:
for (int y = 0; y < image.Height; y++)
{
    for (int x = 0; x < image.Width; x++)
    {
        Color color = image.GetPixel(x, y);
        color.R = color.R + bayer8x8[x % 8, y % 8];
        color.G = color.G + bayer8x8[x % 8, y % 8];
        color.B = color.B + bayer8x8[x % 8, y % 8];
        image.SetPixel(x, y, GetClosestColor(color, bitdepth));
    }
}
but the result is way too bright, so I checked the wiki page again and saw that there's a "1/65" to the right of the threshold map. That got me thinking of both error diffusion (yes I know, weird huh?) and of dividing the value from bayer8x8[x % 8, y % 8] by 65 and then multiplying it with the color channels, but the results were either messy or still too bright (as I remember it). Either way, the results were nothing like what I've seen elsewhere: too bright, too high contrast, or too messy, and I haven't found anything really useful searching the internet. So does anyone know how I can get this Bayer dithering working properly?
Thanks in advance, Cookies
I don't think there's anything wrong with your original algorithm (from Wikipedia). The brightness disparity is probably an artifact of monitor gamma. Check the appendix about gamma correction in this article about a dithering algorithm invented by Joel Yliluoma (http://bisqwit.iki.fi/story/howto/dither/jy/#Appendix%201GammaCorrection) for an explanation of the effect (NB: the page is quite graphics-heavy).
Incidentally, the (apparently public-domain) algorithm detailed in that article may be the solution to your problem...
Try this:
color.R = color.R + bayer8x8[x % 8, y % 8] * GAP / 65;
Here GAP should be the distance between the two nearest color thresholds. This depends on the bits per pixel.
For example, if you are converting the image to use 4 bits for the red component of each pixel, there are 16 levels of red total. They are: R=0, R=17, R=34, ... R=255. So GAP would be 17.
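For a general bit depth, the gap between adjacent levels can be computed like this (a sketch; assumes the levels are spread evenly over 0-255, and integer division makes it approximate when 255 is not evenly divisible):

```csharp
using System;

// Distance between adjacent quantization levels for n bits per channel:
// 2^n levels span 0..255, so the gap is 255 / (2^n - 1).
static int Gap(int bits) => 255 / ((1 << bits) - 1);

Console.WriteLine(Gap(4)); // 16 levels of red: 0, 17, 34, ..., 255
```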
Found a solution. levels is the number of colors the destination image should have, and d is the divisor (this is normalized from my code, which uses Paint.NET classes, to simple bitmap editing with GetPixel and SetPixel):
private void ProcessDither(int levels, int d, Bitmap image)
{
    levels -= 1;
    double scale = (1.0 / 255d);
    int t, l;
    for (int y = rect.Top; y < rect.Bottom; y++)
    {
        for (int x = rect.Left; x < rect.Right; x++)
        {
            Color cp = image.GetPixel(x, y);
            int threshold = matrix[y % rows][x % cols];

            t = (int)(scale * cp.R * (levels * d + 1));
            l = t / d;
            t = t - l * d;
            byte r = Clamp((l + (t >= threshold ? 1 : 0)) * 255 / levels);

            t = (int)(scale * cp.G * (levels * d + 1));
            l = t / d;
            t = t - l * d;
            byte g = Clamp((l + (t >= threshold ? 1 : 0)) * 255 / levels);

            t = (int)(scale * cp.B * (levels * d + 1));
            l = t / d;
            t = t - l * d;
            byte b = Clamp((l + (t >= threshold ? 1 : 0)) * 255 / levels);

            // Color's channel properties are read-only, so rebuild the color
            image.SetPixel(x, y, Color.FromArgb(cp.A, r, g, b));
        }
    }
}

private byte Clamp(int val)
{
    return (byte)(val < 0 ? 0 : val > 255 ? 255 : val);
}
I was recently faced with the problem of cropping and resizing images. I needed to crop the 'main content' of an image; for example, if I had an image similar to this:
(source: msn.com)
the result should be an image with the MSN content without the white margins (left & right).
I search on the X axis for the first and last color change, and the same on the Y axis. The problem is that traversing the image line by line takes a while; for a 2000x1600px image it takes up to 2 seconds to return the CropRect => x1, y1, x2, y2 data.
I tried making a traversal per coordinate and stopping at the first value found, but it didn't work in all test cases; sometimes the returned data wasn't what was expected, and the duration of the operation was similar.
Any idea how to cut down the traversal time and the discovery of the rectangle around the 'main content'?
public static CropRect EdgeDetection(Bitmap Image, float Threshold)
{
    CropRect cropRectangle = new CropRect();
    int largestX = 0;
    int largestY = 0;
    int lowestX = Image.Width;
    int lowestY = Image.Height;
    // find the bounds
    for (int y = 0; y < Image.Height - 1; ++y)
    {
        for (int x = 0; x < Image.Width - 1; ++x)
        {
            Color currentColor = Image.GetPixel(x, y);
            Color tempXcolor = Image.GetPixel(x + 1, y);
            Color tempYColor = Image.GetPixel(x, y + 1);
            if (Math.Sqrt(((currentColor.R - tempXcolor.R) * (currentColor.R - tempXcolor.R)) +
                          ((currentColor.G - tempXcolor.G) * (currentColor.G - tempXcolor.G)) +
                          ((currentColor.B - tempXcolor.B) * (currentColor.B - tempXcolor.B))) > Threshold)
            {
                if (lowestX > x)
                    lowestX = x;
                if (largestX < x)
                    largestX = x;
            }
            if (Math.Sqrt(((currentColor.R - tempYColor.R) * (currentColor.R - tempYColor.R)) +
                          ((currentColor.G - tempYColor.G) * (currentColor.G - tempYColor.G)) +
                          ((currentColor.B - tempYColor.B) * (currentColor.B - tempYColor.B))) > Threshold)
            {
                if (lowestY > y)
                    lowestY = y;
                if (largestY < y)
                    largestY = y;
            }
        }
    }
    if (lowestX < Image.Width / 4)
        cropRectangle.X = lowestX - 3 > 0 ? lowestX - 3 : 0;
    else
        cropRectangle.X = 0;
    if (lowestY < Image.Height / 4)
        cropRectangle.Y = lowestY - 3 > 0 ? lowestY - 3 : 0;
    else
        cropRectangle.Y = 0;
    cropRectangle.Width = largestX - lowestX + 8 > Image.Width ? Image.Width : largestX - lowestX + 8;
    cropRectangle.Height = largestY + 8 > Image.Height ? Image.Height - lowestY : largestY - lowestY + 8;
    return cropRectangle;
}
One possible optimisation is to use LockBits to access the color values directly, rather than through the much slower GetPixel.
The Bob Powell page on LockBits is a good reference.
On the other hand, my testing has shown that the overhead associated with Lockbits makes that approach slower if you try to write a GetPixelFast equivalent to GetPixel and drop it in as a replacement. Instead you need to ensure that all pixel access is done in one hit rather than multiple hits. This should fit nicely with your code provided you don't lock/unlock on every pixel.
Here is an example
BitmapData bmd = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
    System.Drawing.Imaging.ImageLockMode.ReadOnly, b.PixelFormat);
int pixelSize = Image.GetPixelFormatSize(b.PixelFormat) / 8; // bytes per pixel
unsafe
{
    byte* row = (byte*)bmd.Scan0 + (y * bmd.Stride);
    // bytes within a pixel are ordered Blue, Green, Red
    Color c = Color.FromArgb(row[x * pixelSize + 2], row[x * pixelSize + 1], row[x * pixelSize]);
}
b.UnlockBits(bmd);
Two more things to note:
This code is unsafe because it uses pointers
This approach depends on pixel size within Bitmap data, so you will need to derive pixelSize from bitmap.PixelFormat
GetPixel is probably your main culprit (I recommend running some profiling tests to track it down), but you could restructure the algorithm like this:
Scan first row (y = 0) from left-to-right and right-to-left and record the first and last edge location. It's not necessary to check all pixels, as you want the extreme edges.
Scan all subsequent rows, but now we only need to search outward (from center toward edges), starting at our last known minimum edge. We want to find the extreme boundaries, so we only need to search in the region where we could find new extrema.
Repeat the first two steps for the columns, establishing initial extrema and then using those extrema to iteratively bound the search.
This should greatly reduce the number of comparisons if your images are typically mostly content. The worst case is a completely blank image, for which this would probably be less efficient than the exhaustive search.
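The first step above might look like this (a sketch; isEdgeAtX stands in for the per-pixel edge test, which is an assumption of this example):

```csharp
using System;

// Scan one row from both ends, stopping at the first edge found from each
// side. Returns (width, -1) when the row contains no edge at all.
static (int left, int right) FirstRowBounds(Func<int, bool> isEdgeAtX, int width)
{
    int left = width, right = -1;
    for (int x = 0; x < width && left == width; x++)
        if (isEdgeAtX(x)) left = x;            // first edge from the left
    for (int x = width - 1; x >= 0 && right == -1; x--)
        if (isEdgeAtX(x)) right = x;           // first edge from the right
    return (left, right);
}

Console.WriteLine(FirstRowBounds(x => x >= 3 && x <= 7, 10)); // edges at 3..7
```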
In extreme cases, image processing can also benefit from parallelism (split up the image and process it in multiple threads on a multi-core CPU), but this is quite a bit of additional work, and there are other, simpler changes you can still make. Threading overhead tends to limit the applicability of this technique; it is mainly helpful if you expect to run this thing in 'realtime', with dedicated, repeated processing of incoming data (to make up for the initial setup costs).
This won't improve the order of complexity, but if you square your threshold, you won't need to take a square root, which is very expensive.
That should give a significant speed increase.
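For example (a sketch of the squared-distance comparison; the helper name is made up for illustration):

```csharp
using System;

// Compare squared Euclidean distance against a squared threshold:
// mathematically equivalent to sqrt(d2) > threshold, but without Math.Sqrt.
static bool ExceedsThreshold(int r1, int g1, int b1, int r2, int g2, int b2, float threshold)
{
    int dr = r1 - r2, dg = g1 - g2, db = b1 - b2;
    return dr * dr + dg * dg + db * db > threshold * threshold;
}

Console.WriteLine(ExceedsThreshold(0, 0, 0, 10, 0, 0, 5f)); // distance 10 > 5
```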