Polygon difference - strange results from ClipperLib - C#

I am trying to create iso-area polygons ("donuts") from a set of contours. This is the process:
1. Generate the contours.
2. Sort the contours into a tree structure, such that all contours held within a given contour are children of that contour.
3. For each contour, execute a difference operation with all of its children, using ClipperLib (sketched below).
The resulting polygon(s) and holes constitute the iso-area "donut". These iso-areas can then be rendered as a contour map, or used for other purposes. Note that if all I wanted to do was render the contours, I could stop after the initial sort and render the contours in order, so that the innermost ones render on top. However, I do require the actual areas.
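A minimal sketch of step 3 using the Clipper 6 C# API (DiffWithChildren and the contour arguments are illustrative names; Path/Paths are the usual List<IntPoint> aliases):

using System.Collections.Generic;
using ClipperLib;
using Path = System.Collections.Generic.List<ClipperLib.IntPoint>;
using Paths = System.Collections.Generic.List<System.Collections.Generic.List<ClipperLib.IntPoint>>;

// Subtract a contour's direct children (from the sorted tree) from the
// contour itself; the result is one iso-area "donut" as a PolyTree.
static PolyTree DiffWithChildren(Path subjectContour, Paths childContours)
{
    var clipper = new Clipper();
    clipper.AddPath(subjectContour, PolyType.ptSubject, true);  // closed subject
    clipper.AddPaths(childContours, PolyType.ptClip, true);     // closed clips

    var result = new PolyTree();
    clipper.Execute(ClipType.ctDifference, result,
                    PolyFillType.pftNonZero, PolyFillType.pftNonZero);
    return result;
}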
First up - ClipperLib is a fantastic library, and one that I would be very happy to pay good money for - thanks Angus!
My issue is that I seem to get some strange results from the difference operation in certain situations - I suspect this may be user error on my part, so I will illustrate the problem:
This image shows two polygons - the subject is outlined in red, and the children are filled in blue. I wish to subtract the children from the subject. Note the small area near the mouse-pointer. What I would expect from this diff is two outer polygons - the small one near the mouse pointer and the large one. I would further expect all of the "islands" to be holes within the second, large, polygon.
What I actually get is two outers (as expected), with children (holes) associated with each:
In the second image, the tiny polygon near the mouse pointer is the "outer", and all of the other filled polygons are "holes" belonging to it. Note that I show the outlines of both solutions in both images - just focus on the filled polygons.
I am executing ClipperLib using the red polygon in the first image as the subject, and all of the children as clips. The clip type is ctDifference (I have also tried ctXor, which gives the same results - as it should, given that all children lie within the subject).
I am requesting a PolyTree back, which has two Childs. I am using the C# library, and have also tried v6.
On one level, the results that I need are all there - all of the "holes" are designated as such; the problem is that many of these holes are returned as children of the tiny outer area at the top-right of the image. Am I reading the PolyTree incorrectly, using ClipperLib incorrectly, or is this result simply wrong?
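For context, a minimal recursive walk of the solution PolyTree (outers and holes should alternate by depth) looks something like this:

// Print the tree structure: top-level Childs should be outers, their
// Childs holes, and so on, alternating by depth.
static void Walk(PolyNode node, int depth)
{
    foreach (PolyNode child in node.Childs)
    {
        Console.WriteLine("{0}{1}: {2} points",
            new string(' ', depth * 2),
            child.IsHole ? "hole" : "outer",
            child.Contour.Count);
        Walk(child, depth + 1);
    }
}
// e.g. Walk(solutionTree, 0) for the whole solution.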
On a further note - I noticed that the new ClipperLib (v6) now accepts Z values. I'm wondering now if there may be a better method than the one I am using for generating these iso-areas from a given collection of unordered contour lines?
thanks,
Matt
EDIT: I have uploaded the raw data for the polygons in a text file.
here is the link
The file has the subject polygon as the first set of vertices, followed by each of the children. Each polygon is represented as X/Y pairs on a single row, with a newline between each polygon.

In the second image, the tiny polygon near the mouse pointer is the "outer", and all of the other filled polygons are "holes" belonging to it. Note that I show the outlines of both solutions in both images - just focus on the filled polygons.
This sounds like there's a bug somewhere but it's hard to tell without the raw data.
Also, this isn't the best place for Clipper support, there is a discussion forum and a place to report suspected bugs at SourceForge. Anyhow, it's probably best now to post your raw data here (as little as possible please while still reproducing the problem).
Edit:
OK, I've had a look at the data and I don't understand why you believe ... "all of the other filled polygons are "holes" belonging to it".
// Execute the difference, then keep only the top-level contours.
PolyTree solutiontree = new PolyTree();
cpr.Execute(ClipType.ctDifference, solutiontree,
            PolyFillType.pftNonZero, PolyFillType.pftNonZero);

// Top-level nodes of a PolyTree must be outers.
solution = new Polygons(solutiontree.ChildCount);
foreach (PolyNode pn in solutiontree.Childs)
    solution.Add(pn.Contour);
Just filtering the top level PolyNodes of the solution PolyTree (and top level nodes must be 'outers') using the code snippet above, this is what I get (the solution is shaded green) ...
There's no way from this result that "the tiny polygon near the mouse pointer" can own the other polygons. Having said that, there are evidently still holes in the solution so there's a bug somewhere that needs fixing.
Edit 2: I've found and fixed the bug and uploaded Clipper version 6.0.2 to the SourceForge repository. I'll need to do a bit more error checking before I formally update the main Zip package.
Edit 3:
Evidently still not right after all.
Edit 4:
I think I've finally nailed this bug (see revision 420 in SourceForge repository).
Follow up there.


Tangram puzzle application

I am trying to create a WPF application in C# to run on PixelSense that is a basic version of the tangram puzzle. I am able to draw my 7 shapes and translate and rotate them all around the screen.
Could anyone give me advice on how I should go about saving the pattern (with shapes in specific positions and orientations), so that when a user creates the pattern next time, the application can match it against the saved one and tell the user whether it's correct?
It's a pattern matching and recognition problem that I am trying to solve.
I have been stuck on this for a while now :(
Define the solution as a collection of objects with shapeType, position, and orientation properties. Have the solution include one shape at position 0, 0 and an orientation of 0. Now loop over all the shapes the user has actually placed to find the ones with a shapeType that matches the shape your solution has at 0,0,0. Calculate the position and orientation of every other shape relative to where the user put this one. Compare those values to the rest of your solution. You'll need to experiment with how much tolerance to allow because this stuff is not precise - to make the game fun, err on the side of having high tolerances. If needed, you can follow this up with some performance optimizations to only re-evaluate pieces that moved.
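A rough sketch of that matching loop, assuming a hypothetical Shape class with the three properties above (the tolerances and the NormalizeAngle helper are made up for illustration; a full version would also rotate the expected offsets by the anchor's orientation):

using System;
using System.Collections.Generic;
using System.Linq;

class Shape
{
    public int ShapeType;
    public double X, Y;         // position
    public double Orientation;  // degrees
}

static bool MatchesSolution(IList<Shape> placed, IList<Shape> solution,
                            double posTol, double angTol)
{
    // solution[0] is the anchor shape, defined at (0, 0) with orientation 0.
    foreach (Shape anchor in placed.Where(s => s.ShapeType == solution[0].ShapeType))
    {
        // Treat this placed piece as the origin and test every solution
        // entry against the pieces relative to it.
        bool allMatch = solution.All(expected => placed.Any(s =>
            s.ShapeType == expected.ShapeType &&
            Math.Abs(s.X - anchor.X - expected.X) < posTol &&
            Math.Abs(s.Y - anchor.Y - expected.Y) < posTol &&
            Math.Abs(NormalizeAngle(s.Orientation - anchor.Orientation
                                    - expected.Orientation)) < angTol));
        if (allMatch) return true;
    }
    return false;
}

// Wrap an angle difference into [-180, 180] so 359 and 1 degree still match.
static double NormalizeAngle(double a)
{
    while (a > 180) a -= 360;
    while (a < -180) a += 360;
    return a;
}

Per the advice above, err on the high side for posTol and angTol.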
Hopefully you are using physical shape pieces with tags on them instead of making this a purely virtual game. I always wanted to build this when I was on the Surface team but it never happened. One challenge you will run into is defining how the tag's position/orientation relates to the actual shape. If you'll be putting tag stickers on multiple tangram sets, you almost certainly won't get them on precisely the same way each time, so you may need to add a "calibration" mode to your app (have the user place each piece in a specific spot and then push a button so you can record where the tag is relative to those spots). The TagVisualizer WPF control should help a lot for building your UI - definitely look into using it (this scenario was top of mind when we designed that API). The default behavior of that control (if you tell it the ID of a tag to look for but not how to visualize it) is a "crosshair" that can help you fine-tune your offset values.
Good luck! If you wouldn't mind recording a YouTube video when you are done and posting a comment here linking to it, I'd really appreciate that.
You can use an ObservableCollection or List of a custom class. That class can have values such as position and orientation as properties.
When a new pattern is drawn, or when a pattern changes its position, you can update that particular object in the collection. Since you have all the details of each pattern (position and orientation), you can loop over the collection and check the position of a new pattern when it is added.
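As a sketch (the class and property names are illustrative):

using System.Collections.ObjectModel;

public class TangramPiece
{
    public int ShapeType { get; set; }
    public double X { get; set; }
    public double Y { get; set; }
    public double Orientation { get; set; }
}

// Update the matching item whenever a piece is placed or moved.
ObservableCollection<TangramPiece> pieces = new ObservableCollection<TangramPiece>();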

Simple algorithm to crop empty borders from an image by code?

Currently I'm looking for a rather fast and reasonably accurate algorithm in C#/.NET to do these steps in code:
Load an image into memory.
Starting from the color at position (0,0), find the unoccupied space.
Crop away this unnecessary space.
I've illustrated what I want to achieve:
What I can imagine is to get the color of the pixel at (0,0) and then do some unsafe line-by-line/column-by-column walking through all pixels until I meet a pixel with another color, then cut away the border.
I just fear that this is really really slow.
So my question is:
Are you aware of any quick algorithms (ideally without any 3rd party libraries) to cut away "empty" borders from an in-memory image/bitmap?
Side note: the algorithm should be "reasonably accurate", not 100% accurate. Some tolerance, like one line too many or too few being cropped, would be perfectly OK.
Addition 1:
I've just finished implementing my brute force algorithm in the simplest possible manner. See the code over at Pastebin.com.
If you know your image is centered, you might try walking diagonally (i.e. (0,0), (1,1), ... (n,n)) until you have a hit, then backtrack one line at a time, checking until you find an "empty" line (in each dimension). For the image you posted, that would cut out a lot of comparisons.
You should be able to do that from 2 opposing corners concurrently to get some multi-core action.
Of course, hopefully you don't hit the pathological case of a 1-pixel-wide line in the center of the image :) Or the doubly pathological case of disconnected objects in your image, such that the whole image is centered but nothing crosses the diagonal.
One improvement you could make is to give your "hit color" some tolerance (adjustable maybe?)
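A sketch of the diagonal probe (isBackground is a hypothetical pixel test, which could fold in the color tolerance suggested above):

using System;

// Walk the diagonal until we leave the background; backtracking one
// row/column at a time from this hit then proceeds per axis, as described.
static int FindDiagonalHit(Func<int, int, bool> isBackground, int size)
{
    int d = 0;
    while (d < size && isBackground(d, d)) d++;
    return d;  // first diagonal position that differs from the background
}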
The algorithm you are suggesting is a brute-force algorithm and will work every time, for all types of images.
But for special cases like yours, where the subject is centered and is one continuous blob of color (as in your example), a binary-search kind of algorithm can be applied instead (see the sketch below):
Start from the center line (0, length/2) and work outward in one direction at a time, examining lines the way we do in a binary search.
Do this for all four sides.
This reduces the complexity to O(log n).
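A sketch of that idea for the left edge, under the stated assumption that the subject really is one centered blob (isColumnEmpty is a hypothetical test of a whole column against the background color):

using System;

// Bisect between a known-empty column (the far left) and a known-occupied
// one (the center) to find the object's left edge in O(log n) column tests.
static int FindLeftEdge(Func<int, bool> isColumnEmpty, int width)
{
    int lo = 0;          // assumed empty
    int hi = width / 2;  // assumed occupied, since the blob is centered
    while (lo + 1 < hi)
    {
        int mid = (lo + hi) / 2;
        if (isColumnEmpty(mid)) lo = mid; else hi = mid;
    }
    return hi;  // first occupied column
}

Note that each column test still costs O(height), so the total is O(h log w) per side rather than a pure O(log n).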
For starters, your current algorithm is basically the best possible.
If you want it to run faster, you could code it in C++; this tends to be more efficient than managed unsafe code.
If you stay in C#, you can use Parallel Extensions to run it on multiple cores. That won't reduce the load on the machine, but it will reduce the latency, if any.
If you happen to have a precomputed thumbnail for the image, you can apply your algorithm to the thumbnail first to get a rough idea.
First, you can convert your bitmap to a byte[] using LockBits(); this will be much faster than GetPixel() and won't require you to go unsafe.
As long as you don't naively search the whole image and instead search one side at a time, you've nailed the algorithm 95%. Just make sure you are not re-searching already-cropped pixels, as that could actually make the algorithm worse than the naive one if you have two adjacent edges that crop a lot.
A binary search can improve things a tiny bit, but it's not that significant, as in the best case it will maybe save you one line of searching per direction.
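A sketch of the whole approach, assuming the image can be read as 32bpp ARGB (locking in Format32bppArgb makes the row stride exactly width * 4, so a flat int[] copy is safe):

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

// Copy the pixels out once with LockBits, then scan each side inward until
// a row/column contains a pixel that differs from the color at (0,0).
static Rectangle FindCropBounds(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly,
                                   PixelFormat.Format32bppArgb);
    int[] px = new int[bmp.Width * bmp.Height];
    Marshal.Copy(data.Scan0, px, 0, px.Length);
    bmp.UnlockBits(data);

    int w = bmp.Width, h = bmp.Height;
    int bg = px[0];  // the color at (0,0) defines "empty"

    Func<int, int, int, bool> rowEmpty = (y, x0, x1) =>
    {
        for (int x = x0; x <= x1; x++) if (px[y * w + x] != bg) return false;
        return true;
    };
    Func<int, int, int, bool> colEmpty = (x, y0, y1) =>
    {
        for (int y = y0; y <= y1; y++) if (px[y * w + x] != bg) return false;
        return true;
    };

    int top = 0, bottom = h - 1, left = 0, right = w - 1;
    while (top < bottom && rowEmpty(top, left, right)) top++;
    while (bottom > top && rowEmpty(bottom, left, right)) bottom--;
    // Column scans only cover the surviving rows, so already-cropped
    // pixels are never examined twice (the point made above).
    while (left < right && colEmpty(left, top, bottom)) left++;
    while (right > left && colEmpty(right, top, bottom)) right--;

    return Rectangle.FromLTRB(left, top, right + 1, bottom + 1);
}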
Although I prefer Tarang's answer, I'd like to give some hints on how to 'isolate' objects in an image by referring to a given foreground color and background color. This is called 'segmentation' and is used in the field of 'optical inspection', where an image is not just cropped to some detected object but objects are counted and also measured; things you can measure on an object include its area, contour, diameter, etc.
First of all, you usually start walking through your image at x/y coordinates 0,0 and walk from left to right and top to bottom, until you find a pixel that has a value other than the background. The sensitivity of the segmentation is set by defining the grayscale value of the background as well as the grayscale value of the foreground. You walk through the image by coordinates, but from the program's point of view you just walk through an array of pixels, which means you have to deal with the formula that converts an x/y coordinate to the pixel's index in the pixel array. This formula needs the width and height of the image.
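That conversion, for a tightly packed single-channel buffer (for strided formats, substitute the row stride in bytes for the width):

// Map between x/y coordinates and the pixel's index in the flat array.
static int IndexOf(int x, int y, int width) { return y * width + x; }
static int XOf(int index, int width)        { return index % width; }
static int YOf(int index, int width)        { return index / width; }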
For your cropping concern: once you've found the so-called 'pivot point' of your foreground object, you usually walk along the found object using a formula that detects neighboring pixels of the same foreground value. If there is only one object to detect, as in your case, it's easy to store the pixel coordinates that are north-most, east-most, south-most and west-most. These four coordinates mark the rectangle your object fits in, and from them you can calculate the cropped image's width and height.

Algorithm to detect if a pixel is within a boundary

We're currently creating a simple application for image manipulation in Silverlight, and we've hit a bit of a snag. We want users to be able to select an area of an image (either by drawing a freehand line around their chosen area or by creating a polygon around it), and then be able to apply effects to the pixels within that selection.
Creating a selection of images is easy enough, but we want a really fast algorithm for deciding which pixels should be manipulated (ie. something to detect which pixels are within the user's selection).
We've thought of three possibilities so far, but we're sure that there must be a really efficient and quick way of doing this that's better than these.
1. Pixel by pixel.
We just go through every pixel in an image and check whether it's within the user selection. Obviously this is far too slow!
2. Using a Line Crossing Algorithm.
The type of thing seen here.
3. Flood Fill.
Select the pixels along the path of the selection and then perform a flood fill within that selection. This might work fine.
This must be a problem that's commonly solved, so we're guessing there are a ton more solutions that we've not even thought of.
What would you recommend?
A flood fill algorithm is a good choice.
Take a look at this implementation:
Queue-Linear Flood Fill: A Fast Flood Fill Algorithm
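The linked article uses a scanline-optimised queue; for reference, a minimal (unoptimised) queue-based fill over a selection mask might look like this, where insideBoundary, the seed point and the mask are all illustrative:

using System;
using System.Collections.Generic;

// Expand outward from a seed pixel inside the user's selection, marking
// every reachable pixel until the drawn boundary stops the expansion.
static void FloodFill(int seedX, int seedY, int width, int height,
                      Func<int, int, bool> insideBoundary, bool[,] selected)
{
    var queue = new Queue<(int x, int y)>();
    queue.Enqueue((seedX, seedY));

    while (queue.Count > 0)
    {
        var (x, y) = queue.Dequeue();
        if (x < 0 || y < 0 || x >= width || y >= height) continue;
        if (selected[x, y] || !insideBoundary(x, y)) continue;

        selected[x, y] = true;  // this pixel is inside the selection
        queue.Enqueue((x - 1, y));
        queue.Enqueue((x + 1, y));
        queue.Enqueue((x, y - 1));
        queue.Enqueue((x, y + 1));
    }
}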
You should be able to use your polygon to create a clipping path. The mini-language for describing polygons in Silverlight is quite well documented.
Alter the pixels on a copy of your image (modifying all pixels is usually easier than modifying some pixels), then use the clipping path to render only the desired area of the changes back onto the original image (probably using an extra buffer bitmap for the result).
Hope this helps. Just throwing the ideas out and see if any stick :)

Z-fighting between lines in XNA

I'm having a problem where drawing a grid using LineList and another (larger) grid overlapping it will make them flicker due to z-fighting. Using DepthBias will reduce that kind of problem when polygons and lines overlap but it apparently doesn't work when drawing lines in two separate DrawIndexedPrimitives calls.
Currently I "fixed" it by adding to the position of the second grid a small vector pointing towards the camera to simulate the DepthBias but the problem still happens when the camera is far from the grids.
Is there a better way to work around this problem?
From what I've heard you should take a look at your clip-planes. Example thread: xna.com
Edit: Dunno about grids, though, but you could always try! :)
Unfortunately, this is the natural behavior, due to the limited precision of 32-bit floating point numbers (as used by the depth buffer). You can either translate one set of lines minimally (as you do now) and try to choose your clipping planes as close to each other as possible (as Rob mentioned), or:
1. Disable depth testing by setting device.RenderState.DepthBufferFunction = CompareFunction.Always - not by actually disabling the buffer!
2. Draw all your lines.
3. Re-enable depth testing by reversing the change from step 1.
4. Draw all your other geometry.
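As a sketch (XNA 3.x-style render states; DrawGridLines and DrawOtherGeometry are placeholders for your own draw calls, and LessEqual is the default depth function being restored):

// 1. Make the depth test always pass (the buffer itself stays enabled).
device.RenderState.DepthBufferFunction = CompareFunction.Always;
// 2. Draw all the lines.
DrawGridLines();
// 3. Restore the default depth test.
device.RenderState.DepthBufferFunction = CompareFunction.LessEqual;
// 4. Draw everything else.
DrawOtherGeometry();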

How can I extract objects from a bitmap image?

I have a bitmap with black background and some random objects in white. How can I identify these separate objects and extract them from the bitmap?
It should be pretty simple to find the connected white pixel coordinates in the image if the pixels are either black or white. Start scanning pixels row by row until you find a white pixel. Keep track of where you found it, and create a new data structure to hold its connected object. Do a recursive search from that pixel to its surrounding pixels, adding each connected white pixel's coordinates to the data structure. When your search can't find any more connected white pixels, "end" that object.
Go back to where you started and continue scanning pixels. Each time you find a white pixel, see if it is already in one of your existing "objects". If not, create a new object and repeat your search, adding connected white pixels as you go.
When you are done, you should have a set of data structures representing collections of connected white pixels. These are your objects. If you need to identify what they are or simplify them into shapes, you'll need to do some googling - I can't help you there. It's been too long since I took that computer vision course.
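An iterative (queue-based) version of that scan, to avoid stack overflows from recursion on large blobs; isWhite and the Point lists are stand-ins for whatever pixel access and object structure you use:

using System.Collections.Generic;
using System.Drawing;

// Scan for white pixels; each unvisited one seeds a breadth-first search
// that collects its whole 8-connected object.
static List<List<Point>> FindObjects(bool[,] isWhite, int width, int height)
{
    var objects = new List<List<Point>>();
    var visited = new bool[width, height];

    for (int y = 0; y < height; y++)
        for (int x = 0; x < width; x++)
        {
            if (!isWhite[x, y] || visited[x, y]) continue;

            var obj = new List<Point>();     // one connected object
            var queue = new Queue<Point>();
            queue.Enqueue(new Point(x, y));
            visited[x, y] = true;

            while (queue.Count > 0)
            {
                Point p = queue.Dequeue();
                obj.Add(p);
                for (int dy = -1; dy <= 1; dy++)
                    for (int dx = -1; dx <= 1; dx++)
                    {
                        int nx = p.X + dx, ny = p.Y + dy;
                        if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                            continue;
                        if (isWhite[nx, ny] && !visited[nx, ny])
                        {
                            visited[nx, ny] = true;
                            queue.Enqueue(new Point(nx, ny));
                        }
                    }
            }
            objects.Add(obj);
        }
    return objects;
}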
Feature extraction is a really complex topic, and your question doesn't explain the issues you face or the nature of the objects you want to extract.
Usually morphological operators help a lot with such problems (reducing noise, filling gaps, ...). I hope you have already discovered AForge - before you reinvent the wheel, have a look at it. 'Shape recognition' and 'blob analysis' are buzzwords you can google to get some ideas for solutions to your problem.
There are several articles on CodeProject that deal with these kinds of image filters. Unfortunately, I have no idea how they work (and if I did, the answer would probably be too long for here ;P).
1) Morphological operations to make the objects appear "better"
2) Segmentation
3) Classification
Each topic is a big one. There are simple approaches, but your description is not detailed enough...
