I've written a bitmask filter in C# to grow a border around found pixels: a pixel's border should grow if there is enough space, but the grown areas may not touch each other.
input mask output mask
.......... .....xxx...
.....x.... .....xxxx...
.....x.... xxx..xxxxxx.
..x..xxxx. xxx..xxxxxx.
.......... xxx..xxxxxx.
I use a 5x5 binary convolution that checks the border.
The border consists of 8 pixels: (x-2,y-2), (x,y-2), (x+2,y-2), ... i.e. the corners and edge midpoints of the 5x5 neighbourhood.
If a pixel is found, I set the corresponding array element P[i] to true.
So these 8 pixels are represented in a boolean array running from P[0] through P[7].
Next I count the trues.
If the count of P[.] is > 3, there is another object nearby and I don't grow the border there.
If the count is 1 (for example, only (x-2,y) is true), then I write mask pixel (x-1,y).
If the count is 2 (for example, (x+2,y) and (x+2,y+2) are both true), then I write mask pixel (x+1,y+1).
I've got quite a lot of boolean comparisons to make this all work.
I won't bore you with the dull code, but I'm wondering whether the principle I use is ideal for this, or whether there are other methods.
I do get a side effect (though not really a problem) that makes the shapes more and more square with every iteration, which seems to happen less in programs such as Photoshop.
=====
Some additional info: the code has to run on a video stream, so I prefer fast methods. I did write the "dull" code; printed out it's 80+ lines, most of it binary comparisons and switch cases to make it fast. 80+ lines is too long a post for here, and it's much easier to understand when explained in plain language.
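For illustration (my sketch, not the asker's actual code), that mass of binary comparisons and switch cases can usually be collapsed into a table lookup: pack the 8 border samples into one byte and index a precomputed 256-entry action table that encodes the grow/skip decision for every combination once, instead of re-testing it per pixel per frame.

// Illustration only -- not the asker's 80-line version.
// The 8 sampled border positions, clockwise from (x-2,y-2).
static readonly (int dx, int dy)[] Border =
    { (-2,-2), (0,-2), (2,-2), (2,0), (2,2), (0,2), (-2,2), (-2,0) };

static int BorderCode(bool[,] mask, int x, int y)
{
    int code = 0;
    for (int i = 0; i < 8; i++)
        if (mask[x + Border[i].dx, y + Border[i].dy])
            code |= 1 << i;
    return code; // 0..255
}

// A hypothetical action table, built once at start-up: action[code] holds the
// pixel offsets to write for that combination of samples, or nothing when the
// popcount exceeds 3 (another object too close) -- encoding the rules above.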
Have you heard of mathematical morphology? You have more or less implemented a thickening operator (thickening is defined differently, but I think the result is more or less similar), but there are more efficient ways to do the same thing.
One alternative approach is to:
Compute the distance transform of the inverse of your image. This is an image where the further away from your objects you get, the larger the grey value is. This image will have ridges exactly half-way between the objects. Let's call this image distance.
Compute the watershed of distance. This is a binary image with lines that run along the ridges of distance. Let's call this separation.
Threshold distance at the appropriate value to get the objects grown by the amount you want. If you want to grow objects by 10 pixels, threshold such that all pixels with a value less than or equal to 10 are selected (true).
Subtract separation from this thresholded image: any pixel that is set in both should be reset. You now have your original objects grown by your chosen distance, but still separate.
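If you want to stay in plain C# without a morphology library, here is a rough approximation of the same recipe (my own sketch, not the answerer's code): a two-pass 3-4 chamfer distance transform that also propagates the label of the nearest object, so the watershed-like separation falls out as the boundary between different labels. It assumes labels already holds a distinct positive id per object (e.g. from connected-component labeling) and 0 for background; a real watershed from a library will be more exact.

static bool[,] GrowSeparated(int[,] labels, int radius)
{
    int h = labels.GetLength(0), w = labels.GetLength(1);
    var dist = new int[h, w];                  // chamfer distance to nearest object
    var near = new int[h, w];                  // label of that nearest object
    const int INF = int.MaxValue / 2, D1 = 3, D2 = 4; // 3-4 chamfer weights

    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        { dist[y, x] = labels[y, x] > 0 ? 0 : INF; near[y, x] = labels[y, x]; }

    for (int y = 0; y < h; y++)                // forward pass
        for (int x = 0; x < w; x++)
        {
            Relax(dist, near, y, x, y, x - 1, D1);
            Relax(dist, near, y, x, y - 1, x, D1);
            Relax(dist, near, y, x, y - 1, x - 1, D2);
            Relax(dist, near, y, x, y - 1, x + 1, D2);
        }
    for (int y = h - 1; y >= 0; y--)           // backward pass
        for (int x = w - 1; x >= 0; x--)
        {
            Relax(dist, near, y, x, y, x + 1, D1);
            Relax(dist, near, y, x, y + 1, x, D1);
            Relax(dist, near, y, x, y + 1, x + 1, D2);
            Relax(dist, near, y, x, y + 1, x - 1, D2);
        }

    // Keep pixels within `radius` of an object (one orthogonal step costs 3
    // chamfer units), but reset pixels whose 4-neighbourhood mixes labels --
    // those lie on the separation ridge half-way between objects.
    var result = new bool[h, w];
    int maxDist = radius * D1;
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            if (dist[y, x] > maxDist) continue;
            int lbl = near[y, x];
            bool onRidge = (x > 0 && near[y, x - 1] != lbl)
                        || (x < w - 1 && near[y, x + 1] != lbl)
                        || (y > 0 && near[y - 1, x] != lbl)
                        || (y < h - 1 && near[y + 1, x] != lbl);
            result[y, x] = !onRidge;
        }
    return result;
}

static void Relax(int[,] dist, int[,] near, int y, int x, int ny, int nx, int cost)
{
    if (ny < 0 || nx < 0 || ny >= dist.GetLength(0) || nx >= dist.GetLength(1)) return;
    if (dist[ny, nx] + cost < dist[y, x])
    { dist[y, x] = dist[ny, nx] + cost; near[y, x] = near[ny, nx]; }
}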
Related
I am writing some software that periodically checks a camera image to identify whether an object has been introduced to the viewed scene. I am using ImageMagick in my WinForms software to compare two images and create a third. In the third image, a pixel is white if the first two images had a similar-coloured pixel there, and black if they were different. So if the user sees a group of black pixels, they will know something has been placed in the scene that wasn't there previously, as seen here:
The user will not be seeing this image, so I would like my software to identify this for me. I don't need anything too complex from this analysis - just a Boolean for whether something was changed in the scene.
In my mind, there are two approaches to analysing this: counting the number of black pixels in the image, or writing some algorithm to identify patches of black. My question is about the second approach, since it feels like the more correct one. The image is a bit too noisy (you can see false positives on straight edges) for me to feel entirely comfortable with counting.
To identify a group, I have considered using for loops to look at the colours of the pixels surrounding every pixel, but this seems like it would take forever. My processing time can't take more than a few seconds, so I need to be wary of this. Are there cleaner or more efficient ways to identify groups of similarly-coloured pixels? Or will I need to run loops and just be as efficient as possible?
Threshold the image so that black pixels have the value 1 and non-black pixels have the value 0.
Use connected component labeling to find all the groups of connected black pixels: http://www.imagemagick.org/script/connected-components.php
Filter out the components that are too small or don't have the correct shape (for example, a long line along the sides has a lot of black pixels, but you are not expecting a long line as a valid group of black pixels).
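If you'd rather not call out to ImageMagick, a simple flood-fill labeller is enough at this image size. A sketch (my code; assumes the thresholded image is already in a bool[,], true = black):

using System.Collections.Generic;
using System.Linq;

static List<List<(int x, int y)>> LabelComponents(bool[,] img)
{
    int w = img.GetLength(0), h = img.GetLength(1);
    var seen = new bool[w, h];
    var components = new List<List<(int x, int y)>>();
    var stack = new Stack<(int x, int y)>();

    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
        {
            if (!img[x, y] || seen[x, y]) continue;
            var comp = new List<(int x, int y)>();   // start a new component
            seen[x, y] = true;
            stack.Push((x, y));
            while (stack.Count > 0)                  // flood fill, 4-connected
            {
                var (cx, cy) = stack.Pop();
                comp.Add((cx, cy));
                foreach (var (nx, ny) in new[] { (cx - 1, cy), (cx + 1, cy), (cx, cy - 1), (cx, cy + 1) })
                    if (nx >= 0 && ny >= 0 && nx < w && ny < h && img[nx, ny] && !seen[nx, ny])
                    { seen[nx, ny] = true; stack.Push((nx, ny)); }
            }
            components.Add(comp);
        }
    return components;
}

// Usage: anything big enough counts as a real change.
// bool sceneChanged = LabelComponents(mask).Any(c => c.Count >= minPixels);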
At this point I assume you have a sense of the scale of the objects you are interested in capturing. For example, suppose you had an entirely non-black screen (let's ignore noise for the purpose of this discussion) except for a black object only roughly 10 pixels in diameter. That seems frightfully small, and not enough information to be useful.
Once you have determined the minimum size of black mass you are willing to accept, I would then go about querying a staggered matrix,
i.e., a pattern like the one sketched below.
Use a little math to determine what acceptable noise is.
Once you have positive results (pixels = black), investigate those sectors.
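Since the original pattern illustration isn't shown, here is one reading of the staggered-matrix idea as a sketch (my interpretation): sample every few pixels, offsetting alternate rows by half a step, count hits per sector, and only call a sector positive once it collects more hits than the noise allowance.

static bool HasBlackMass(bool[,] black, int step, int sectorSize, int minHitsPerSector)
{
    int w = black.GetLength(0), h = black.GetLength(1);
    var hits = new int[(w + sectorSize - 1) / sectorSize,
                       (h + sectorSize - 1) / sectorSize];

    for (int y = 0; y < h; y += step)
    {
        int offset = (y / step) % 2 == 0 ? 0 : step / 2;  // stagger odd rows
        for (int x = offset; x < w; x += step)
            if (black[x, y])
                hits[x / sectorSize, y / sectorSize]++;
    }

    // "Acceptable noise": isolated stray pixels never push a sector over the
    // threshold; a real black mass puts several samples in the same sector,
    // which is then worth investigating at full resolution.
    foreach (int count in hits)
        if (count >= minHitsPerSector) return true;
    return false;
}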
I read that WPF programs can draw lines between pixels, displaying half of the line on one pixel and half on the other. Does this mean that there is no actual limit on how small a line I can draw, except as limited by hardware?
WPF uses double to represent all the geometric properties (like the length of a line). When it comes to actually outputting to the screen, these are obviously rounded to pixels, but that happens right down at the core level. As far as you, the programmer, need to know, you can design lines of any length between double.MinValue and double.MaxValue (±1.8E+308).
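For example (a small illustration of my own, not from the answer), fractional coordinates and sub-pixel stroke widths are perfectly legal; WPF anti-aliases the result by spreading the line's coverage across neighbouring physical pixels:

using System.Windows.Media;
using System.Windows.Shapes;

var line = new Line
{
    X1 = 10.0, Y1 = 10.25,   // fractional positions are fine
    X2 = 200.0, Y2 = 10.75,
    StrokeThickness = 0.5,   // thinner than one device pixel
    Stroke = Brushes.Black
};
// Set SnapsToDevicePixels or UseLayoutRounding when you want WPF to round
// back to whole device pixels instead of blending.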
I'm currently looking for a rather fast and reasonably accurate algorithm in C#/.NET to do these steps in code:
Load an image into memory.
Starting from the color at position (0,0), find the unoccupied space.
Crop away this unnecessary space.
I've illustrated what I want to achieve:
What I can imagine is getting the color of the pixel at (0,0) and then doing some unsafe line-by-line/column-by-column walking through all pixels until I meet a pixel with another color, then cutting away the border.
I just fear that this will be really, really slow.
So my question is:
Are you aware of any quick algorithms (ideally without any 3rd party libraries) to cut away "empty" borders from an in-memory image/bitmap?
Side-note: the algorithm should be "reasonably accurate", not 100% accurate. Some tolerance, like one line too many or too few cropped, would be perfectly OK.
Addition 1:
I've just finished implementing my brute force algorithm in the simplest possible manner. See the code over at Pastebin.com.
If you know your image is centered, you might try walking diagonally (i.e. (0,0), (1,1), ... (n,n)) until you have a hit, then backtracking one line at a time, checking until you find an "empty" line (in each dimension). For the image you posted, that would cut a lot of comparisons.
You should be able to do that from 2 opposing corners concurrently to get some multi-core action.
Of course, hopefully you don't hit the pathological case of a 1-pixel-wide line in the center of the image :) Or the doubly pathological case of disconnected objects in your image such that the whole image is centered, but nothing crosses the diagonal.
One improvement you could make is to give your "hit color" some tolerance (adjustable maybe?)
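A sketch of the diagonal probe (IsBackground(x, y) is a hypothetical helper applying the colour tolerance mentioned above):

using System;

static (int x, int y)? FirstHitOnDiagonal(int width, int height,
                                          Func<int, int, bool> isBackground)
{
    int n = Math.Min(width, height);
    for (int i = 0; i < n; i++)
        if (!isBackground(i, i))
            return (i, i);   // first object pixel on the diagonal
    return null;             // the pathological case: nothing crosses it
}

// From the hit, backtrack one row/column at a time until a fully "empty"
// line is found in each dimension; those lines are the crop edges.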
The algorithm you are suggesting is a brute-force algorithm and will work every time, for all types of images.
But for special cases, such as a subject that is centered and forms a continuous blob of color (as in your example), a binary-search-style algorithm can be applied:
Start from the center line (0, length/2), go in one direction at a time, and examine the lines as you would in a binary search.
Do this for all four sides.
This reduces the number of lines examined to about log2 n per side, as sketched below.
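A sketch of that idea for one side (my code; the monotonicity only holds under the stated assumption of a single centered, continuous blob, and NonEmptyColumn is a hypothetical helper testing one column of pixels):

using System;

static int FindLeftEdge(Func<int, bool> nonEmptyColumn, int width)
{
    int lo = 0, hi = width / 2;   // the left edge lies between 0 and the centre
    while (lo < hi)
    {
        int mid = (lo + hi) / 2;
        if (nonEmptyColumn(mid)) hi = mid;  // edge is at mid or to its left
        else lo = mid + 1;                  // still inside the empty border
    }
    return lo;  // the first column containing object pixels
}
// Repeat, mirrored, for the right, top and bottom edges.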
For starters, your current algorithm is basically the best possible.
If you want it to run faster, you could code it in C++; this tends to be more efficient than managed unsafe code.
If you stay in C#, you can use Parallel Extensions to run it on multiple cores. That won't reduce the load on the machine, but it will reduce the latency, if any.
If you happen to have a precomputed thumbnail of the image, you can apply your algorithm to the thumbnail first to get a rough idea.
First, you can convert your bitmap to a byte[] using LockBits(); this will be much faster than GetPixel() and won't require you to go unsafe.
As long as you don't naively search the whole image, and instead search one side at a time, you've nailed the algorithm 95%. Just make sure you are not re-searching already cropped pixels, as this might actually make the algorithm worse than the naive one if you have two adjacent edges that crop a lot.
A binary search can improve things a tiny bit, but it's not that significant, as it will maybe save you one line of searching per direction in the best-case scenario.
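A sketch pulling those points together (my own code; assumes 32bpp and, as in the question, takes the pixel at (0,0) as the background colour):

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static Rectangle FindCropBounds(Bitmap bmp)
{
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly,
                                   PixelFormat.Format32bppArgb);
    try
    {
        int stride = data.Stride;
        var px = new byte[stride * bmp.Height];
        Marshal.Copy(data.Scan0, px, 0, px.Length);   // one copy, no GetPixel

        int bg = BitConverter.ToInt32(px, 0);         // background = pixel (0,0)
        bool RowEmpty(int y)
        {
            for (int x = 0; x < bmp.Width; x++)
                if (BitConverter.ToInt32(px, y * stride + x * 4) != bg) return false;
            return true;
        }
        bool ColEmpty(int x, int top, int bottom)
        {
            for (int y = top; y <= bottom; y++)
                if (BitConverter.ToInt32(px, y * stride + x * 4) != bg) return false;
            return true;
        }

        int top = 0, bottom = bmp.Height - 1, left = 0, right = bmp.Width - 1;
        while (top < bottom && RowEmpty(top)) top++;
        while (bottom > top && RowEmpty(bottom)) bottom--;
        // Columns only scan the surviving rows, so cropped pixels are
        // never searched twice.
        while (left < right && ColEmpty(left, top, bottom)) left++;
        while (right > left && ColEmpty(right, top, bottom)) right--;

        return Rectangle.FromLTRB(left, top, right + 1, bottom + 1);
    }
    finally { bmp.UnlockBits(data); }
}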
Although I prefer Tarang's answer, I'd like to give some hints on how to 'isolate' objects in an image by referring to a given foreground color and background color. This is called 'segmentation' and is used in the field of 'optical inspection', where an image is not just cropped to some detected object but objects are counted and measured; things you can measure on an object include area, contour, diameter, etc.
First of all, you usually start walking through the image at x/y coordinates (0,0), going left to right and top to bottom, until you find a pixel that has a value different from the background. The sensitivity of the segmentation is set by defining the grayscale value of the background as well as the grayscale value of the foreground. You will conceptually walk through the image by coordinates, but from the program's point of view you just walk through an array of pixels, so you have to deal with the formula that maps an x/y coordinate to the pixel's index in the pixel array: index = y * width + x. This is why the formula needs the image's width and height.
For your concern of cropping: once you've found the so-called 'pivot point' of your foreground object, you usually walk along the found object using a rule that detects neighboring pixels of the same foreground value. If there is only one object to detect, as in your case, it's easy to store the coordinates of the pixels that are north-most, east-most, south-most and west-most. These 4 coordinates mark the rectangle your object fits in, and with this information you can calculate the new (cropped) image's width and height.
I have an interesting problem. I'm almost there but am curious how others would tackle it. I want to display some multi-line text in a predefined area. I don't know what the text will be or how big the area will be so the function would have to be written generically. You can assume a standard font is always used but the point size is what must change.
Assume you have a function that draws text passed to it in a string parameter. The function has a form object to draw in and is also passed a rectangle object defining the bounding area of the text on the form. The function needs to display the text on the form, in the given rectangle, in as large a font as will fit. The challenge for me is in calculating the size of font to use so that the text fits as well as it can in the rectangle, with minimal white space.
These 2 equations might be useful:

float pixels = (points * dpi) / 72f;
float points = (pixels * 72f) / dpi;

Also:

float dpi = CreateGraphics().DpiY;
Well, it is tricky. Calculating a point size directly isn't going to work; the width of the text depends on the font metrics. Binary search is the obvious strategy, but it cannot quite work on its own in practice: TrueType hinting and word wrapping conspire to destabilize it.
I'd recommend you start with a binary search, setting hi and lo to reasonable defaults like 72 and 6. Then, when the range narrows down to, say, 5 points, start testing each individual point size until you find the largest one that fits. When you write the algorithm, do make sure you account for the possibility that a size N fits but a size N-1 doesn't; fitting is not strictly monotonic.
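A sketch of that strategy (my code; assumes WinForms and Graphics.MeasureString for the fit test, with the hi/lo defaults from above):

using System;
using System.Drawing;

static float LargestFittingSize(Graphics g, string text, Font prototype, Size box)
{
    bool Fits(float pts)
    {
        using (var f = new Font(prototype.FontFamily, pts, prototype.Style))
        {
            SizeF needed = g.MeasureString(text, f, box.Width); // wraps at box width
            return needed.Width <= box.Width && needed.Height <= box.Height;
        }
    }

    float lo = 6f, hi = 72f;        // the reasonable defaults suggested above
    while (hi - lo > 5f)            // binary-search down to a ~5 pt window
    {
        float mid = (lo + hi) / 2f;
        if (Fits(mid)) lo = mid; else hi = mid;
    }
    // Hinting and word wrap make fitting non-monotonic, so walk the final
    // window one step at a time and take the largest size that fits.
    for (float pts = hi; pts >= lo; pts -= 0.5f)
        if (Fits(pts)) return pts;
    return lo;
}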
There is a significant problem with any solution: you need to determine the fit based on the width as well, and width is completely dependent on the font. That means you need to calculate the width of each word independently at a predefined point size, and as you change the point size the widths are not guaranteed to scale consistently.
The solution won't be fast if you want it to be accurate.
I would suggest selecting two point sizes (say 6 and 18) that represent the smallest and a mid-to-high point size, computing the pixel width of each word at each point size, and then computing the area of the text at both sizes.
You could then extrapolate to the area of the rectangle you find appropriate and determine the target size (width and height), using an arbitrary width/height ratio based on the length of the text; there is an optimum readable width, for instance.
You will then need to iteratively attempt to word-wrap inside the rectangle, working backwards in point size until the text fits within the rectangle.
Binary search over point sizes:
Start with the biggest available point size. If it doesn't fit, try half of that, ...
My idea is to grab each pixel, analyze its (255,255,255) RGB value, and give each pixel a place in just 1 of 10 divisions I will have laid out.
This won't give a full-color representation, but my point is to make ASCII art that at least resembles the shapes of the objects in the picture; to outline them, so to speak.
Would this work?
If I've understood the process correctly, the effect would be similar to the "posterize" option you get in some paint packages, where the number of colours is reduced to some arbitrary number.
It would work; here's a complete example in C#:
http://www.codeproject.com/KB/web-image/AsciiArt.aspx
Most implementations of ASCII art don't convert one pixel to one character. Instead, they divide the image into rectangular regions of pixels and then analyse each region. As well as the overall intensity of a region, you also want to look at how the intensity is spread over the region, and pick a matching character.
If you limit your character set to just 10 characters, however, you probably won't get a particularly good representation of the original image.
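As a rough sketch of the block-based approach (my own illustration, separate from the CodeProject example): average each cell's brightness and map it onto a 10-step character ramp. GetPixel keeps the sketch short; LockBits would be the faster route for anything beyond a toy.

using System.Drawing;
using System.Text;

static string ToAscii(Bitmap bmp, int cellW = 8, int cellH = 12)
{
    const string ramp = "@%#*+=-:. ";   // 10 levels, darkest to lightest
    var sb = new StringBuilder();
    for (int y = 0; y + cellH <= bmp.Height; y += cellH)
    {
        for (int x = 0; x + cellW <= bmp.Width; x += cellW)
        {
            long sum = 0;
            for (int dy = 0; dy < cellH; dy++)
                for (int dx = 0; dx < cellW; dx++)
                {
                    Color c = bmp.GetPixel(x + dx, y + dy);
                    sum += (c.R + c.G + c.B) / 3;           // average brightness
                }
            int avg = (int)(sum / (cellW * cellH));         // 0..255
            sb.Append(ramp[avg * (ramp.Length - 1) / 255]); // bucket into 10 levels
        }
        sb.AppendLine();
    }
    return sb.ToString();
}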