Coloured Line Detection Using AForge - C#

I have a requirement to scan various images for coloured lines; the result determines what we do with each image: no lines = delete, lines = save.
I have been meeting this requirement adequately by simply comparing the colour of each pixel to a list of known colours that we are looking for; if we find more than a certain threshold of matching pixels, we are happy that there is something on the image that we are interested in.
I recently had to re-work this as we started to receive highly compressed JPEGs and (for example) the red line ended up being made up of hundreds of shades of red. I got this working reliably, but the process got me thinking that there must be a better way, so I have started to look at AForge to determine whether it could be used to detect the different coloured lines.
I have spent a day looking into it and think that it will work, but I need some guidance on what the best approach/method will be, as computer vision is a very big field and I only need to learn a very small part of it for the time being.
This is an example of one of the images.
In this instance I'd want to find out about the red and blue lines; I'd disregard the black ones.
I've been reading and testing some things with Hough line detection and have had some very limited success detecting a STRAIGHT line on a black and white image, but I can't find many examples of detecting curved coloured lines.
All I'm looking for is a little guidance on whether AForge is the best way forward (if it can even do what I want) and an idea of what the process would look like, so that I can go and investigate the right things!

In case this is of use to anyone else in the future, I found a way to do this. It's still not perfect, but it has improved the reliability of our process.
Step 1 -> Remove all but the colour that we are interested in:
// Keep only pixels within 'radius' of the target colour (Euclidean distance in RGB space)
var c = Color.Red;
EuclideanColorFiltering filter = new EuclideanColorFiltering();
filter.CenterColor = new RGB(c.R, c.G, c.B);
filter.Radius = (short)radius;   // tolerance; higher values accept more shades
filter.ApplyInPlace(input);      // input is a 24bpp Bitmap
Step 2 -> Convert to grayscale:
// Grayscale.Apply returns a new image rather than converting in place
var grayImage = Grayscale.CommonAlgorithms.BT709.Apply(input);
Step 3 -> Run the result through a Hough line transform:
var lineTransform = new HoughLineTransformation();
lineTransform.ProcessImage(grayImage);   // feed it the grayscale result of step 2
HoughLine[] lines = lineTransform.GetLinesByRelativeIntensity(_intensity);
Step 1 pretty much yields the same result that I used to get by scanning the image for pixels of a specific colour, but the HoughLineTransformation has the effect of identifying which pixels form a line, removing a lot of the noise that we had on the highly compressed JPEGs.
There is still a bit of an issue in that the way we filter out all but the colours we are interested in doesn't work for all colours; we have quite a few shades of grey that we need to identify, and that also picks up the outlines of roads etc., so there is still work to do. But what I describe above has got us much closer to a solution.
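To tie the three steps together, here is a rough end-to-end sketch of the kind of check they add up to. The method name, parameters and the "any line means keep" rule are illustrative assumptions rather than part of the answer above, and the radius/intensity values would need tuning for your own images.

using System.Drawing;
using AForge.Imaging;
using AForge.Imaging.Filters;

static bool HasColouredLine(Bitmap source, Color target, short radius, double relativeIntensity)
{
    // Work on a 24bpp copy so the original image is untouched
    using (Bitmap working = AForge.Imaging.Image.Clone(source, System.Drawing.Imaging.PixelFormat.Format24bppRgb))
    {
        // Step 1: keep only pixels close to the target colour
        var filter = new EuclideanColorFiltering
        {
            CenterColor = new RGB(target.R, target.G, target.B),
            Radius = radius
        };
        filter.ApplyInPlace(working);

        // Step 2: convert to grayscale for the Hough transform
        using (Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(working))
        {
            // Step 3: look for line-shaped groups of the remaining pixels
            var hough = new HoughLineTransformation();
            hough.ProcessImage(gray);
            HoughLine[] lines = hough.GetLinesByRelativeIntensity(relativeIntensity);
            return lines.Length > 0;   // any line of this colour means we keep the image
        }
    }
}

Calling something like this once per colour of interest (red, then blue, in the example image) gives the keep/delete decision described at the top of the question.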

Related

Identify groups of similar-coloured pixels

I am writing some software that periodically checks a camera image to identify whether an object has been introduced to the viewed scene. I am using ImageMagick in my WinForms software to compare two images to create a third. In the third image, a pixel is white if the first two images had a similar-coloured pixel, and black if they were different. So if the user sees a group of black pixels, they will know something has been placed in the scene that wasn't there previously, as seen here:
The user will not be seeing this image, so I would like my software to identify this for me. I don't need anything too complex from this analysis - just a Boolean for whether something was changed in the scene.
In my mind, there are two approaches to analysing this: counting the number of black pixels in the image, or writing some algorithm to identify patches of black. My question is about the second approach, since it feels like the more correct one. The image is a bit too noisy (you can see false positives on straight edges) for me to feel entirely comfortable with simple counting.
To identify a group, I have considered using some for loops to look at the colours of the pixels surrounding every pixel, but this seems like it would take forever. The processing can't take more than a few seconds, so I need to be wary of this. Are there cleaner or more efficient ways to identify groups of similarly-coloured pixels? Or will I need to run loops and be as efficient as possible?
Threshold the image so that black pixels have value 1 and non-black pixels have value 0.
Use connected component labeling to find all the groups of connected black pixels. http://www.imagemagick.org/script/connected-components.php
Filter out the components that are too small or don't have the correct shape (for example, a long line along an edge has a lot of black pixels, but you are not expecting a long line to be a valid group of black pixels).
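To make the labeling step concrete, here is a minimal connected-component sketch in C# (a breadth-first flood over the thresholded image), rather than the ImageMagick implementation linked above. It assumes the thresholded image is already available as a bool[,] mask, and the minSize check corresponds to the "too small" filter in step 3.

using System.Collections.Generic;

// Labels 4-connected groups of 'true' pixels and returns the size of each surviving group.
// mask[y, x] == true means the pixel was black in the difference image.
static List<int> ComponentSizes(bool[,] mask, int minSize)
{
    int height = mask.GetLength(0), width = mask.GetLength(1);
    var labels = new int[height, width];   // 0 = unlabeled
    var sizes = new List<int>();
    int nextLabel = 0;

    for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
    {
        if (!mask[y, x] || labels[y, x] != 0) continue;

        nextLabel++;
        int size = 0;
        var queue = new Queue<(int y, int x)>();
        queue.Enqueue((y, x));
        labels[y, x] = nextLabel;

        while (queue.Count > 0)
        {
            var (cy, cx) = queue.Dequeue();
            size++;
            // visit the 4 neighbours
            foreach (var (ny, nx) in new[] { (cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1) })
            {
                if (ny < 0 || ny >= height || nx < 0 || nx >= width) continue;
                if (!mask[ny, nx] || labels[ny, nx] != 0) continue;
                labels[ny, nx] = nextLabel;
                queue.Enqueue((ny, nx));
            }
        }

        if (size >= minSize) sizes.Add(size);   // drop components that are too small
    }
    return sizes;
}

If any component survives the minSize filter (and whatever extra shape checks you add), the scene has changed.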
At this point I assume you have a sense of the scale of the objects you are interested in capturing. For example, suppose you had an entirely non-black screen (ignoring noise for the purpose of this discussion) except for a black object that is only roughly 10 pixels in diameter; that seems frightfully small, and not enough information to be useful.
Once you have determined the minimum size of black mass you are willing to accept, I would then go about querying a staggered matrix, i.e. sampling the pixels in a staggered, checkerboard-like pattern rather than checking every one.
Use a little math to determine what counts as acceptable noise.
Once you get positive results (sampled pixels that are black), investigate those sectors more closely.
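As I read it, the staggered sampling idea looks roughly like the sketch below: test only a sparse, offset grid of pixels and compare the number of black hits against an expected noise level. The step size, brightness cut-off and noise threshold are arbitrary placeholders, and GetPixel is used only for brevity.

// Count black samples on a staggered (checkerboard-like) grid and flag the
// image for closer inspection when the hits exceed the expected noise.
static bool HasBlackMass(System.Drawing.Bitmap diff, int step, int noiseThreshold)
{
    int hits = 0;
    for (int y = 0; y < diff.Height; y += step)
    {
        // offset every other row so the samples are staggered
        int xStart = (y / step) % 2 == 0 ? 0 : step / 2;
        for (int x = xStart; x < diff.Width; x += step)
        {
            if (diff.GetPixel(x, y).GetBrightness() < 0.1f) hits++;
        }
    }
    return hits > noiseThreshold;   // more hits than expected noise -> investigate those sectors
}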

Counting Pixels of a certain colour in XNA and C#

This is my first post here and I'm fairly desperate.
My issue is that I'm supposed to make a game for a client (more of a friend-favour type of thing, so no pay :( but meh, a job's a job and I said I could do it, build rep and all that).
I'm using XNA 4 and therefore C#.
Now I need to figure out a way to count pixels of a certain colour, black to be specific. That's it.
I have drawn a bunch of sprites and they are NOT black. Some of them are UI, but they are all at the side of the screen, so I should be able to ignore those pixels, right?
I figured I should be able to read the buffers once I have made all my draw calls, but I just can't find anything on the internet (that's readable) that tells me whether that's even possible.
Reading the colour of the pixels I'm interested in is all I need to do. Let's say everything from width 50 to the edge of the window, for example.
C# and XNA 4 answers are what I need, whether you give me the answer on a silver platter or point me in the direction of something that will tell me how to do the various parts of it.
As programmers go I still count as a noob, just... a trained noob, so I might ask stupid questions in response to your excellent answers :)
You should be able to use GraphicsDevice.ResolveBackBuffer() to copy the screen to a ResolveTexture2D. Then you can use Texture2D.GetData() to store the texture's color data in a Color[] array.
// Create a texture that matches the back buffer
ResolveTexture2D texture = new ResolveTexture2D(
    graphics.GraphicsDevice,
    graphics.GraphicsDevice.PresentationParameters.BackBufferWidth,
    graphics.GraphicsDevice.PresentationParameters.BackBufferHeight,
    1,   // mip levels
    graphics.GraphicsDevice.PresentationParameters.BackBufferFormat);

// Copy the back buffer into it after your draw calls
graphics.GraphicsDevice.ResolveBackBuffer(texture);

// Pull the pixel data into a CPU-side array
Color[] colorData = new Color[texture.Width * texture.Height];
texture.GetData<Color>(colorData);
From there you should be able to do your thing to count the colors while ignoring edge pixels.
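For the counting itself, a plain loop over colorData is enough. A rough sketch, assuming the UI occupies the leftmost 50 columns (the figure used in the question) and that "black" means exactly Color.Black:

int blackCount = 0;
int ignoreLeft = 50;   // skip the UI strip on the left-hand side, per the question

for (int y = 0; y < texture.Height; y++)
{
    for (int x = ignoreLeft; x < texture.Width; x++)
    {
        // colorData is laid out row by row: index = y * width + x
        if (colorData[y * texture.Width + x] == Color.Black)
        {
            blackCount++;
        }
    }
}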

Algorithm to detect if a pixel is within a boundary

We're currently creating a simple application for image manipulation in Silverlight, and we've hit a bit of a snag. We want users to be able to select an area of an image (either by drawing a freehand line around their chosen area or by creating a polygon around it), and then be able to apply effects to the pixels within that selection.
Creating the selection itself is easy enough, but we want a really fast algorithm for deciding which pixels should be manipulated (i.e. something to detect which pixels are within the user's selection).
We've thought of three possibilities so far, but we're sure that there must be a really efficient and quick way of doing this that's better than these.
1. Pixel by pixel.
We just go through every pixel in an image and check whether it's within the user selection. Obviously this is far too slow!
2. Using a Line Crossing Algorithm.
The type of thing seen here.
3. Flood Fill.
Select the pixels along the path of the selection and then perform a flood fill within that selection. This might work fine.
This must be a problem that's commonly solved, so we're guessing there are plenty more solutions that we've not even thought of.
What would you recommend?
A flood fill algorithm is a good choice.
Take a look at this implementation:
Queue-Linear Flood Fill: A Fast Flood Fill Algorithm
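For reference, here is a minimal queue-based flood fill in the spirit of that article. It assumes the selection outline has already been rasterised into a bool[,] mask and that a seed point inside the selection is known; both are assumptions about how the selection is stored rather than anything from the question.

using System.Collections.Generic;

// Marks every pixel inside the selection, starting from a seed point known to
// be inside. boundary[y, x] == true means the pixel lies on the selection outline.
static bool[,] FillSelection(bool[,] boundary, int seedX, int seedY)
{
    int height = boundary.GetLength(0), width = boundary.GetLength(1);
    var inside = new bool[height, width];
    var queue = new Queue<(int x, int y)>();
    queue.Enqueue((seedX, seedY));
    inside[seedY, seedX] = true;

    while (queue.Count > 0)
    {
        var (x, y) = queue.Dequeue();
        foreach (var (nx, ny) in new[] { (x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1) })
        {
            if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
            if (inside[ny, nx] || boundary[ny, nx]) continue;   // stop at the outline
            inside[ny, nx] = true;
            queue.Enqueue((nx, ny));
        }
    }
    return inside;   // true = pixel should receive the effect
}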
You should be able to use your polygon to create a clipping path. The mini-language for describing polygons in Silverlight is quite well documented.
Alter the pixels on a copy of your image (altering all pixels is usually easier than altering only some), then use the clipping path to render only the desired area of the changes back onto the original image (probably using an extra buffer bitmap for the result).
Hope this helps. Just throwing ideas out to see if any stick :)

Filter to reverse anti-alias effects

I have bitmaps of lines and text that have anti-aliasing applied to them. I want to develop a filter that removes the anti-alias effect. I'm looking for ideas on how to go about doing that, so to start I need to understand how anti-alias algorithms work. Are there any good links, or even code out there?
I need to understand how anti-alias algorithms work
Anti-aliasing works by rendering the image at a higher resolution before it is down-sampled to the output resolution. In the down-sampling process the higher resolution pixels are averaged to create lower resolution pixels. This will create smoother color changes in the rendered image.
Consider this very simple example where a block outline is rendered on a white background.
It is then down-sampled to half the resolution, in the process creating pixels with shades of gray:
Here is a more realistic demonstration of anti-aliasing used to render the letter S:
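To make that concrete, the down-sampling step is just an average over each block of high-resolution pixels. A simplified sketch, assuming 2x supersampling and a grayscale image stored as a byte[,] (both simplifications, not part of the answer above):

// Averages each 2x2 block of a high-resolution grayscale image into one
// output pixel, which is where the intermediate shades of gray come from.
static byte[,] Downsample2x(byte[,] highRes)
{
    int outH = highRes.GetLength(0) / 2, outW = highRes.GetLength(1) / 2;
    var result = new byte[outH, outW];
    for (int y = 0; y < outH; y++)
    {
        for (int x = 0; x < outW; x++)
        {
            int sum = highRes[2 * y, 2 * x] + highRes[2 * y, 2 * x + 1]
                    + highRes[2 * y + 1, 2 * x] + highRes[2 * y + 1, 2 * x + 1];
            result[y, x] = (byte)(sum / 4);   // e.g. two black + two white pixels -> mid gray
        }
    }
    return result;
}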
I am not familiar at all with C# programming, but I do have experience with graphics. The closest thing to an anti-anti-alias filter would be a sharpening filter (at least in practice, using Photoshop), usually applied multiple times depending on the desired effect. The sharpening filter works best when there is already strong contrast between the anti-aliased elements and the background, and even better if the background is one flat color rather than a complex graphic.
If you have access to any advanced graphics editor, you could try a few tests, and if you're happy with the results you could start looking into sharpening filters.
Also, if you are working with grayscale bitmaps, an even better solution is to convert it to a B/W image - that will remove any anti-aliasing on it.
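For the grayscale case, that conversion is just a threshold. A minimal sketch, assuming the bitmap has already been read into a byte[,] of gray values (the 128 cut-off is an arbitrary placeholder):

// Snap every gray pixel to pure black or pure white, removing the
// intermediate anti-aliasing shades.
static void ThresholdInPlace(byte[,] gray, byte cutoff = 128)
{
    for (int y = 0; y < gray.GetLength(0); y++)
        for (int x = 0; x < gray.GetLength(1); x++)
            gray[y, x] = gray[y, x] < cutoff ? (byte)0 : (byte)255;
}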
Hope this helps at least a bit :)

How can I extract objects from a bitmap image?

I have a bitmap with black background and some random objects in white. How can I identify these separate objects and extract them from the bitmap?
It should be pretty simple to find the connected white pixel coordinates in the image if the pixels are either black or white. Start scanning pixels row by row until you find a white pixel. Keep track of where you found it, and create a new data structure to hold its connected object. Do a recursive search from that pixel to its surrounding pixels, adding each connected white pixel's coordinates to the data structure. When your search can't find any more connected white pixels, "end" that object.
Go back to where you started and continue scanning pixels. Each time you find a white pixel, check whether it is already in one of your existing "objects". If not, create a new object and repeat the search, adding connected white pixels as you go.
When you are done, you should have a set of data structures representing collections of connected white pixels. These are your objects. If you need to identify what they are or simplify them into shapes, you'll need to do some googling -- I can't help you there. It's been too long since I took that computer vision course.
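A compact sketch of that scan-and-grow approach, using an explicit stack instead of recursion (deep recursion can overflow on large objects); the isWhite mask and method name are illustrative assumptions:

using System.Collections.Generic;
using System.Drawing;

// Returns one list of pixel coordinates per connected white object.
// isWhite[y, x] == true for white pixels of the black/white bitmap.
static List<List<Point>> ExtractObjects(bool[,] isWhite)
{
    int height = isWhite.GetLength(0), width = isWhite.GetLength(1);
    var visited = new bool[height, width];
    var objects = new List<List<Point>>();

    for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
    {
        if (!isWhite[y, x] || visited[y, x]) continue;

        // found an unvisited white pixel: grow a new object from it
        var pixels = new List<Point>();
        var stack = new Stack<Point>();
        stack.Push(new Point(x, y));
        visited[y, x] = true;

        while (stack.Count > 0)
        {
            Point p = stack.Pop();
            pixels.Add(p);
            // look at the 8 surrounding pixels
            for (int dy = -1; dy <= 1; dy++)
            for (int dx = -1; dx <= 1; dx++)
            {
                int nx = p.X + dx, ny = p.Y + dy;
                if (nx < 0 || nx >= width || ny < 0 || ny >= height) continue;
                if (!isWhite[ny, nx] || visited[ny, nx]) continue;
                visited[ny, nx] = true;
                stack.Push(new Point(nx, ny));
            }
        }
        objects.Add(pixels);
    }
    return objects;
}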
Feature extraction is a really complex topic, and your question doesn't explain the issues you face or the nature of the objects you want to extract.
Usually morphological operators help a lot with such problems (reducing noise, filling gaps, ...). I hope you have already discovered AForge; before you reinvent the wheel, have a look at it. Shape recognition and blob analysis are buzzwords you can look up on Google to get some ideas for solutions to your problem.
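Since AForge was mentioned, its blob analysis support covers roughly this case. A small sketch, assuming a bitmap with a black background and white objects, and with placeholder size limits:

using System.Drawing;
using AForge.Imaging;

// bitmap: black background with white objects (e.g. an 8bpp grayscale image)
var blobCounter = new BlobCounter
{
    FilterBlobs = true,   // ignore tiny specks of noise
    MinWidth = 5,         // placeholder size limits
    MinHeight = 5
};
blobCounter.ProcessImage(bitmap);

// One Blob per connected white object, with its bounding rectangle
Blob[] blobs = blobCounter.GetObjectsInformation();
foreach (Blob blob in blobs)
{
    Rectangle bounds = blob.Rectangle;
    // blobCounter.ExtractBlobsImage(bitmap, blob, false) would give the object's own pixels
}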
There are several articles on CodeProject that deal with these kinds of image filters. Unfortunately, I have no idea how they work (and if I did, the answer would probably be too long for here ;P ).
1) Morphological operations to make the objects appear "better"
2) Segmentation
3) Classification
Each topic is a big one. There are simple approaches, but your description is not very detailed...
