Algorithm to detect if a pixel is within a boundary - C#

We're currently creating a simple application for image manipulation in Silverlight, and we've hit a bit of a snag. We want users to be able to select an area of an image (either by drawing a freehand line around their chosen area or by creating a polygon around it), and then be able to apply effects to the pixels within that selection.
Creating a selection is easy enough, but we want a really fast algorithm for deciding which pixels should be manipulated (i.e. something to detect which pixels are within the user's selection).
We've thought of three possibilities so far, but we're sure that there must be a really efficient and quick way of doing this that's better than these.
1. Pixel by pixel.
We just go through every pixel in an image and check whether it's within the user selection. Obviously this is far too slow!
2. Using a Line Crossing Algorithm.
The type of thing seen here.
3. Flood Fill.
Select the pixels along the path of the selection and then perform a flood fill within that selection. This might work fine.
This must be a commonly solved problem, so we're guessing there are a ton more solutions that we've not even thought of.
What would you recommend?

A flood fill algorithm is a good choice.
Take a look at this implementation:
Queue-Linear Flood Fill: A Fast Flood Fill Algorithm
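The linked article's core idea is to replace recursion with an explicit queue. A minimal sketch of that idea, assuming the image is available as a packed ARGB int[] (as Silverlight's WriteableBitmap.Pixels gives you); the names here are illustrative:

```csharp
using System.Collections.Generic;

static class FloodFill
{
    // Fills every pixel 4-connected to (startX, startY) that matches the
    // start pixel's color, using an explicit queue instead of recursion.
    public static void Fill(int[] pixels, int width, int height,
                            int startX, int startY, int fillColor)
    {
        int targetColor = pixels[startY * width + startX];
        if (targetColor == fillColor) return; // nothing to do, avoids an infinite loop

        var queue = new Queue<int>();
        queue.Enqueue(startY * width + startX);

        while (queue.Count > 0)
        {
            int index = queue.Dequeue();
            if (pixels[index] != targetColor) continue; // already filled or outside the region

            pixels[index] = fillColor;

            int x = index % width, y = index / width;
            if (x > 0)          queue.Enqueue(index - 1);
            if (x < width - 1)  queue.Enqueue(index + 1);
            if (y > 0)          queue.Enqueue(index - width);
            if (y < height - 1) queue.Enqueue(index + width);
        }
    }
}
```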

You should be able to use your polygon to create a clipping path. The mini-language for describing polygons for Silverlight is quite well documented.
Alter the pixels on a copy of your image (modifying all pixels is usually easier than modifying only some), then use the clipping path to render only the desired area of the changes back to the original image (probably using an extra buffer bitmap for the result).
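A minimal sketch of the clip-and-composite idea, assuming the processed copy is shown in an Image element layered over the original and the selection arrives as a PointCollection (both names are placeholders):

```csharp
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

static class SelectionClip
{
    // Build a clip geometry from the user's selection polygon and apply it
    // to the element showing the fully processed copy of the image.
    public static void ClipToSelection(Image processedImage, PointCollection selectionPoints)
    {
        var figure = new PathFigure
        {
            StartPoint = selectionPoints[0],
            IsClosed = true
        };
        figure.Segments.Add(new PolyLineSegment { Points = selectionPoints });

        var geometry = new PathGeometry();
        geometry.Figures.Add(figure);

        // Only the area inside the polygon is rendered; layering this
        // element over the untouched original composites the result.
        processedImage.Clip = geometry;
    }
}
```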
Hope this helps. Just throwing ideas out to see if any stick :)

Related

How to hide "cracks" between GeometryModel3Ds in one ModelVisual3D?

I am writing a geoscience visualization application that uses WPF 3D. The user needs to be able to zoom deep into detail and out quickly with minimum resources taken. I've decided to divide my slice (ModelVisual3D) into subrectangles (GeometryModel3D), so that each has its own texture that changes when the camera zooms in (similar to Google Maps).
The problem is that "cracks" are appearing between subrectangles, even though they actually have no empty space between them.
How can I hide these? Or is there any other way to assign multiple materials with different sizes to one ModelVisual3D?
PS I've tried making the background gray, light-gray, silver and white-smoke. It helps a little, but it's not acceptable. I've also tried overlapping the subrectangles, with no result.
Instead of your current setup you might want to make several textures at different resolutions and switch between these depending on the zoom level. (Mipmaps)
When getting really close you might replace the entire object, switching it for a much smaller one, and use a highly detailed texture.
It will require a bit more pre-processing but you will be able to use a single geometry.
Seems like changing ImageBrush's stretch to Stretch.None and using textures larger than the subsquare helps. Although now I need more precise control over texture coordinates for the surface.
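For reference, that change amounts to something like the following; the Viewbox fractions are illustrative, and with an oversized texture the Viewbox is what selects the visible part of the image:

```csharp
using System.Windows;
using System.Windows.Media;

static class TextureBrushes
{
    // Create a non-stretching brush over an oversized texture; the Viewbox
    // picks the region of the image the brush shows, which gives the more
    // precise control over texture placement mentioned above.
    public static ImageBrush MakeSubsquareBrush(ImageSource textureSource)
    {
        return new ImageBrush(textureSource)
        {
            Stretch = Stretch.None,
            ViewboxUnits = BrushMappingMode.RelativeToBoundingBox,
            Viewbox = new Rect(0.1, 0.1, 0.8, 0.8) // illustrative values
        };
    }
}
```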

Simple algorithm to crop empty borders from an image by code?

Currently I'm looking for a rather fast and reasonably accurate algorithm in C#/.NET to do these steps in code:
Load an image into memory.
Starting from the color at position (0,0), find the unoccupied space.
Crop away this unnecessary space.
I've illustrated what I want to achieve:
What I can imagine is to get the color of the pixel at (0,0) and then do some unsafe line-by-line/column-by-column walking through all pixels until I meet a pixel with another color, then cut away the border.
I just fear that this is really really slow.
So my question is:
Are you aware of any quick algorithms (ideally without any 3rd-party libraries) to cut away "empty" borders from an in-memory image/bitmap?
Side-note: The algorithm should be "reasonably accurate", not 100% accurate. Some tolerance, like one line too many or too few cropped, would be perfectly OK.
Addition 1:
I've just finished implementing my brute force algorithm in the simplest possible manner. See the code over at Pastebin.com.
If you know your image is centered, you might try walking diagonally (i.e. (0,0), (1,1), ... (n,n)) until you have a hit, then backtrack one line at a time, checking until you find an "empty" line (in each dimension). For the image you posted, it would cut out a lot of comparisons.
You should be able to do that from 2 opposing corners concurrently to get some multi-core action.
Of course, hopefully you don't hit the pathological case of a 1-pixel-wide line in the center of the image :) Or the doubly pathological case of disconnected objects in your image, such that the whole image is centered but nothing crosses the diagonal.
One improvement you could make is to give your "hit color" some tolerance (adjustable maybe?)
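A rough sketch of the diagonal walk and backtrack for the top-left corner, single-threaded and with an exact background match for brevity:

```csharp
using System;

static class DiagonalCrop
{
    // Walk the diagonal until we leave the background color, then back up
    // one line at a time in each dimension until we find an "empty" line.
    public static (int top, int left) FindTopLeft(int[] pixels, int width, int height)
    {
        int background = pixels[0];
        int d = 0, limit = Math.Min(width, height);
        while (d < limit && pixels[d * width + d] == background)
            d++;
        if (d == limit) return (0, 0); // nothing found on the diagonal

        int top = d, left = d;
        while (top > 0 && !RowIsEmpty(pixels, width, top - 1, background)) top--;
        while (left > 0 && !ColumnIsEmpty(pixels, width, height, left - 1, background)) left--;
        return (top, left);
    }

    static bool RowIsEmpty(int[] pixels, int width, int y, int background)
    {
        for (int x = 0; x < width; x++)
            if (pixels[y * width + x] != background) return false;
        return true;
    }

    static bool ColumnIsEmpty(int[] pixels, int width, int height, int x, int background)
    {
        for (int y = 0; y < height; y++)
            if (pixels[y * width + x] != background) return false;
        return true;
    }
}
```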
The algorithm you are suggesting is a brute-force algorithm and will work all the time for all types of images.
But for special cases, like a subject image that is centered and is one continuous blob of color (as you have displayed in your example), a binary-search-like algorithm can be applied.
Start from the center line (0, length/2) and move in one direction at a time, examining the lines as we do in binary search.
Do this for all four sides.
This will reduce the complexity to O(log₂ n).
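A sketch of that idea for the top edge only, assuming a single centered, connected blob so the "is this row empty?" predicate flips exactly once between the top of the image and the center row:

```csharp
static class BinaryCrop
{
    // Binary-search the topmost non-empty row between row 0 and the center.
    // Valid only if the blob is one connected region touching the center row.
    public static int FindTopEdge(int[] pixels, int width, int height, int background)
    {
        int lo = 0, hi = height / 2; // assumes the center row is non-empty
        while (lo < hi)
        {
            int mid = (lo + hi) / 2;
            if (RowIsEmpty(pixels, width, mid, background))
                lo = mid + 1; // blob starts below mid
            else
                hi = mid;     // mid touches the blob; the edge is at or above it
        }
        return lo; // first non-empty row
    }

    static bool RowIsEmpty(int[] pixels, int width, int y, int background)
    {
        for (int x = 0; x < width; x++)
            if (pixels[y * width + x] != background) return false;
        return true;
    }
}
```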
For starters, your current algorithm is basically the best possible.
If you want it to run faster, you could code it in C++. This tends to be more efficient than unsafe managed code.
If you stay in C#, you can use Parallel Extensions to run it on multiple cores. That won't reduce the load on the machine, but it will reduce the latency, if any.
If you happen to have a precomputed thumbnail for the image, you can apply your algorithm to the thumbnail first to get a rough idea.
First, you can convert your bitmap to a byte[] using LockBits(), this will be much faster than GetPixel() and won't require you to go unsafe.
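A minimal sketch of the LockBits route with System.Drawing, assuming a 32bpp image:

```csharp
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class PixelAccess
{
    // Copy the bitmap's raw 32bpp ARGB data into a managed byte array,
    // which is much faster to scan than repeated GetPixel() calls.
    public static byte[] GetPixelBytes(Bitmap bitmap)
    {
        var rect = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
        BitmapData data = bitmap.LockBits(rect, ImageLockMode.ReadOnly,
                                          PixelFormat.Format32bppArgb);
        try
        {
            var bytes = new byte[data.Stride * data.Height];
            Marshal.Copy(data.Scan0, bytes, 0, bytes.Length);
            // Pixel (x, y) starts at y * data.Stride + x * 4, in B, G, R, A order.
            return bytes;
        }
        finally
        {
            bitmap.UnlockBits(data);
        }
    }
}
```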
As long as you don't naively search the whole image and instead search one side at a time, you've nailed the algorithm 95%. Just make sure you are not searching already-cropped pixels, as this might actually make the algorithm worse than the naive one if you have two adjacent edges that crop a lot.
A binary search can improve things a tiny bit, but it's not that significant, as it will maybe save you a line of search for each direction in the best-case scenario.
Although I prefer Tarang's answer, I'd like to give some hints on how to 'isolate' objects in an image by referring to a given foreground color and background color. This is called 'segmentation' and is used in the field of 'optical inspection', where an image is not just cropped to some detected object but objects are counted and also measured; things you can measure on an object are area, contour, diameter, etc.
First of all, you usually start to walk through your image beginning at x/y coordinate (0,0) and walk from left to right and top to bottom until you find a pixel that has another value than the background. The sensitivity of the segmentation is given by defining the grayscale value of the background as well as the grayscale value of the foreground. You will conceptually walk through the image by coordinates, but from the program's view you'll just walk through an array of pixels. That means you have to deal with the formula that maps an x/y coordinate to the pixel's index in the pixel array (index = y * width + x); this formula needs the width and height of the image.
For your cropping concern, I think once you've found the so-called 'pivot point' of your foreground object, you usually walk along the found object using a formula that detects neighboring pixels of the same foreground value. If there is only one object to detect, as in your case, it's easy to store the pixel coordinates that are northmost, eastmost, southmost and westmost. These four coordinates mark the rectangle your object fits in. With this information you can calculate the cropped image's width and height.
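The answer describes walking the object's contour; for the single-object case, a simpler variant that yields the same four extremes is one whole-image scan that tracks the minimum and maximum foreground coordinates. A sketch, using the y * width + x index formula from above and an exact background match for brevity:

```csharp
using System.Drawing;

static class Segmentation
{
    // Scan the pixel array once, recording the extreme coordinates of every
    // foreground pixel; the four extremes define the crop rectangle.
    public static Rectangle FindObjectBounds(int[] pixels, int width, int height,
                                             int background)
    {
        int minX = width, minY = height, maxX = -1, maxY = -1;
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                if (pixels[y * width + x] == background) continue;
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
        if (maxX < 0) return Rectangle.Empty; // no foreground pixel found
        return new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    }
}
```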

Detecting a blob of color in an image

I have an image that is a depth heatmap that I've filtered out anything further away than the first 25% of the image.
It looks something like this:
There are two blobs of color in the image, one is my hand (with part of my face behind it), and the other is the desk in the lower left corner. How can I search the image to find these blobs? I would like to be able to draw a rectangle around them if possible.
I can also do this (ignore shades, and filter to black or white):
Pick a random pixel as a seed pixel. This becomes area A. Repeatedly expand A until A doesn't get any bigger. That's your area.
The way to expand A is by looking for pixels neighboring A that have a similar color to at least one of their neighbors inside A.
What "similar color" means to you is somewhat variable. If you can make exactly two colors, as you say in another answer, then "similar" is "equal". Otherwise, "similar" would mean colors that have RGB values or whatnot where each component of the two colors is within a small amount of each other (i.e. 255, 128, 128 is similar to 252, 125, 130).
You can also limit the selected pixels so they must be similar to the seed pixel, but that works better when a human is picking the seed. (I believe this is what is done in Photoshop, for example.)
This can be better than edge detection because you can deal with gradients without filtering them out of existence, and you don't need to process the resulting detected edges into a coherent area. It has the disadvantage that a gradient can go all the way from black to white and it'll register as the same area, but that may be what you want. Also, you have to be careful with the implementation or else it will be too slow.
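A sketch of that region-growing loop on packed ARGB pixels, with a per-channel tolerance; the tolerance of 10 is arbitrary:

```csharp
using System;
using System.Collections.Generic;

static class RegionGrow
{
    // Grow an area from the seed, accepting neighbors whose color is
    // "similar" (each channel within tolerance) to the pixel they touch.
    public static HashSet<int> Grow(int[] pixels, int width, int height,
                                    int seedX, int seedY, int tolerance = 10)
    {
        int seed = seedY * width + seedX;
        var area = new HashSet<int> { seed };
        var frontier = new Stack<int>();
        frontier.Push(seed);

        while (frontier.Count > 0)
        {
            int index = frontier.Pop();
            int x = index % width, y = index / width;

            foreach (int n in Neighbors(x, y, width, height))
            {
                if (!area.Contains(n) && Similar(pixels[index], pixels[n], tolerance))
                {
                    area.Add(n);
                    frontier.Push(n);
                }
            }
        }
        return area; // indices (y * width + x) of the grown area
    }

    static IEnumerable<int> Neighbors(int x, int y, int width, int height)
    {
        if (x > 0)          yield return y * width + (x - 1);
        if (x < width - 1)  yield return y * width + (x + 1);
        if (y > 0)          yield return (y - 1) * width + x;
        if (y < height - 1) yield return (y + 1) * width + x;
    }

    static bool Similar(int a, int b, int tolerance)
    {
        // Compare the A, R, G and B channels of two packed ARGB ints.
        for (int shift = 0; shift < 32; shift += 8)
            if (Math.Abs(((a >> shift) & 0xFF) - ((b >> shift) & 0xFF)) > tolerance)
                return false;
        return true;
    }
}
```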
It might be overkill for what you need, but there's a great C# wrapper for the OpenCV libraries.
I have successfully used OpenCV in C++ for blob detection, so you might find it useful for what you're trying to do.
http://www.emgu.com/wiki/index.php/Main_Page
and the wiki page on OpenCV:
http://en.wikipedia.org/wiki/OpenCV
Edited to add: here is a blob detection library for Emgu in C#. There are even some nice features for ordering the blobs by descending area (useful for filtering out noise).
http://www.emgu.com/forum/viewtopic.php?f=3&t=205
Edit Again:
If Emgu is too heavyweight, AForge.NET also includes some blob detection methods
http://www.aforgenet.com/framework/
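With AForge.NET the blob search reduces to a few lines using its BlobCounter; this is from memory of the AForge.Imaging API, so treat the details as approximate:

```csharp
using System.Drawing;
using AForge.Imaging;

static class BlobSearch
{
    // Find bounding rectangles of the white blobs in a black-and-white bitmap.
    public static Rectangle[] FindBlobRectangles(Bitmap binaryImage)
    {
        var counter = new BlobCounter
        {
            FilterBlobs = true, // drop tiny noise blobs
            MinWidth = 5,       // illustrative size thresholds
            MinHeight = 5
        };
        counter.ProcessImage(binaryImage);
        return counter.GetObjectsRectangles(); // one rectangle per blob
    }
}
```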
If the image really is only two (or a few more) distinct colours, with very little blur between colours, it is an easy case for an edge detection algorithm.
You can use something like the code sample from this question : find a color in an image in c#
It will help you find the x/y of specific colors in your image. Then you could use the min x/max x and the min y/max y to draw your rectangles.
Detect object from image based on object color by C#.
To detect an object based on its color, there is an easy algorithm for that. You have to choose a filtering method. The steps normally are:
Take the image
Apply your filtering
Apply grayscaling
Subtract background and get your objects
Find position of all objects
Mark the objects
First you have to choose a filtering method; there are many filtering methods available for C#. Mainly I prefer AForge filters; for this purpose they have a few:
ColorFiltering
ChannelFiltering
HSLFiltering
YCbCrFiltering
EuclideanColorFiltering
My favorite is EuclideanColorFiltering. It is easy and simple. For information about other filters, you can visit the link below. You have to download the AForge DLLs to apply these in your code.
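To give a feel for it, a sketch using AForge's EuclideanColorFiltering (from memory of the API; the center color and radius are illustrative):

```csharp
using System.Drawing;
using AForge.Imaging;
using AForge.Imaging.Filters;

static class ColorIsolation
{
    // Keep only pixels within a Euclidean distance of the target color;
    // everything else becomes black, effectively subtracting the background.
    public static Bitmap IsolateColor(Bitmap source)
    {
        var filter = new EuclideanColorFiltering
        {
            CenterColor = new RGB(215, 30, 30), // the object color to keep
            Radius = 100                        // tolerance around that color
        };
        return filter.Apply(source);
    }
}
```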
More information about the exact steps can be found here: Link

filter to reverse anti-alias effects

I have bitmaps of lines and text that have anti-aliasing applied to them. I want to develop a filter that removes the anti-aliasing effect. I'm looking for ideas on how to go about doing that, so to start I need to understand how anti-aliasing algorithms work. Are there any good links, or even code out there?
I need to understand how anti-aliasing algorithms work
Anti-aliasing works by rendering the image at a higher resolution before it is down-sampled to the output resolution. In the down-sampling process the higher resolution pixels are averaged to create lower resolution pixels. This will create smoother color changes in the rendered image.
Consider this very simple example where a block outline is rendered on a white background.
It is then down-sampled to half the resolution, in the process creating pixels with shades of gray:
Here is a more realistic demonstration of anti-aliasing used to render the letter S:
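To make the averaging step concrete, a minimal sketch of 2×2 box down-sampling on a grayscale buffer; blocks that straddle an edge are what produce the in-between gray values:

```csharp
static class Downsampling
{
    // Average each 2x2 block of the high-resolution grayscale image into
    // one output pixel (image dimensions assumed even for brevity).
    public static byte[] Downsample2x(byte[] src, int width, int height)
    {
        int w2 = width / 2, h2 = height / 2;
        var dst = new byte[w2 * h2];
        for (int y = 0; y < h2; y++)
        {
            for (int x = 0; x < w2; x++)
            {
                int sum = src[(2 * y) * width + 2 * x]
                        + src[(2 * y) * width + 2 * x + 1]
                        + src[(2 * y + 1) * width + 2 * x]
                        + src[(2 * y + 1) * width + 2 * x + 1];
                dst[y * w2 + x] = (byte)(sum / 4);
            }
        }
        return dst;
    }
}
```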
I am not familiar at all with C# programming, but I do have experience with graphics. The closest thing to an anti-anti-alias filter would be a sharpening filter (at least in practice, using Photoshop), usually applied multiple times, depending on the desired effect. A sharpening filter works best when there is already great contrast between the anti-aliased elements and the background, and even better if the background is one flat color rather than a complex graphic.
If you have access to any advanced graphics editor, you could try a few tests, and if you're happy with the results you could start looking into sharpening filters.
Also, if you are working with grayscale bitmaps, an even better solution is to convert them to B/W images - that will remove any anti-aliasing.
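A sketch of that conversion as a simple threshold on a grayscale buffer; the cutoff of 128 is arbitrary:

```csharp
static class BlackWhite
{
    // Snap every grayscale pixel to pure black or pure white, discarding
    // the intermediate shades that anti-aliasing introduced.
    public static void Threshold(byte[] gray, byte cutoff = 128)
    {
        for (int i = 0; i < gray.Length; i++)
            gray[i] = gray[i] < cutoff ? (byte)0 : (byte)255;
    }
}
```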
Hope this helps at least a bit :)

How can I extract objects from a bitmap image?

I have a bitmap with black background and some random objects in white. How can I identify these separate objects and extract them from the bitmap?
It should be pretty simple to find the connected white pixel coordinates in the image if the pixels are either black or white. Start scanning pixels row by row until you find a white pixel. Keep track of where you found it, and create a new data structure to hold its connected object. Do a recursive search from that pixel to its surrounding pixels, adding each connected white pixel's coordinates to the data structure. When your search can't find any more connected white pixels, "end" that object.

Go back to where you started and continue scanning pixels. Each time you find a white pixel, see if it is in one of your existing "objects". If not, create a new object and repeat your search, adding connected white pixels as you go.

When you are done, you should have a set of data structures representing collections of connected white pixels. These are your objects. If you need to identify what they are or simplify them into shapes, you'll need to do some googling -- I can't help you there. It's been too long since I took that computer vision course.
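A sketch of that scan, using an explicit stack instead of recursion so large objects don't overflow the call stack (pixels indexed as y * width + x):

```csharp
using System.Collections.Generic;

static class ObjectExtraction
{
    // Scan row by row; each unvisited white pixel seeds a new object, which
    // is grown by an iterative depth-first search over 4-connected neighbors.
    public static List<List<int>> ExtractObjects(bool[] white, int width, int height)
    {
        var objects = new List<List<int>>();
        var visited = new bool[white.Length];
        var stack = new Stack<int>();

        for (int start = 0; start < white.Length; start++)
        {
            if (!white[start] || visited[start]) continue;

            var obj = new List<int>();
            visited[start] = true;
            stack.Push(start);

            while (stack.Count > 0)
            {
                int index = stack.Pop();
                obj.Add(index);
                int x = index % width, y = index / width;

                // Push each unvisited white 4-neighbor.
                if (x > 0 && white[index - 1] && !visited[index - 1])
                { visited[index - 1] = true; stack.Push(index - 1); }
                if (x < width - 1 && white[index + 1] && !visited[index + 1])
                { visited[index + 1] = true; stack.Push(index + 1); }
                if (y > 0 && white[index - width] && !visited[index - width])
                { visited[index - width] = true; stack.Push(index - width); }
                if (y < height - 1 && white[index + width] && !visited[index + width])
                { visited[index + width] = true; stack.Push(index + width); }
            }
            objects.Add(obj);
        }
        return objects; // one coordinate list per connected white object
    }
}
```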
Feature extraction is a really complex topic, and your question doesn't describe the issues you face or the nature of the objects you want to extract.
Usually morphological operators help a lot with such problems (reducing noise, filling gaps, ...). I hope you have already discovered AForge; before you reinvent the wheel, have a look at it. Shape recognition and blob analysis are buzzwords you can google to get some ideas for solutions to your problem.
There are several articles on CodeProject that deal with these kinds of image filters. Unfortunately, I have no idea how they work (and if I did, the answer would probably be too long for here ;P ).
1) Morphological operations to make the objects appear "better"
2) Segmentation
3) Classification
Each topic is a big one. There are simple approaches, but your description is not very detailed...
