I have one image of a real machine part. On paper, people were drawing lines and circles on particular areas of a printed copy of it: certain sides of the image were designated as areas 1-4, the center as areas 5-6, and a few specific spots as area 7. They wanted to make this digital. So I put the image on a WinForm and made it so they can draw lines and circles (eventually I'll have the lines measured and stored in an array). However, along with storing that, I also need to store the number of the area each line or circle was created in, for tracking purposes, so they can look back later and see where the majority of (in this case) flaws are being circled on the image. Right now I'm stuck trying to figure out how to designate certain areas of this image as areas 1-4, 5-6, and multiple 7s.
Is it better to split each area into its own image and piece them together like a puzzle, or is there a better way to do this in code that leaves one image to work with?
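One way to keep a single image, assuming WinForms and System.Drawing, is to define each numbered area as a `Region` built from a `GraphicsPath` in image coordinates and then hit-test a drawn line's midpoint or a circle's centre against those regions. This is only a sketch; `AreaMap`, `AddArea`, and `HitTest` are made-up names, and the polygon outlines are placeholders you would replace with the real area boundaries:

```csharp
// Sketch: designate numbered areas as regions over a single image (WinForms / System.Drawing).
// The polygon coordinates you pass in are placeholders; use the real area outlines.
using System.Collections.Generic;
using System.Drawing;
using System.Drawing.Drawing2D;

public class AreaMap
{
    private readonly List<(int AreaNumber, Region Region)> _areas = new List<(int, Region)>();

    // Register an area by its outline, in image coordinates.
    public void AddArea(int areaNumber, Point[] outline)
    {
        var path = new GraphicsPath();
        path.AddPolygon(outline);
        _areas.Add((areaNumber, new Region(path)));
    }

    // Return the area number containing the given point, or null if none.
    // For a line, test its midpoint; for a circle, test its center.
    public int? HitTest(Point p)
    {
        foreach (var (number, region) in _areas)
        {
            if (region.IsVisible(p))
                return number;
        }
        return null;
    }
}
```

The returned area number could then be stored alongside the measured line or circle. If some areas overlap (for example, the multiple area-7 spots sitting inside areas 1-4), register and test the more specific regions first.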
Related
I am trying to stitch together a series of images created from multiple screen captures of a very large image. The user moved vertically and horizontally at maximum resolution over the source image and took screenshots (including window borders, taskbar, etc.) with some overlap, and wants to put them together again.
There are existing solutions that will stitch photos together, but my case is much simpler than creating a panorama of independent photos because the images match exactly.
I am struggling to come up with an efficient algorithm that will find the largest matching rectangle common to adjacent images. Is there an existing solution or can someone suggest a way?
I am using pseudocode from the longest common substring problem to find matches. This finds horizontal matches when I store the bitmap in a one-dimensional array, but not vertical matches.
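Since the captures are stated to match exactly, a brute-force check may be enough: for a vertically adjacent pair, find the largest number of rows at the bottom of the upper capture that are pixel-identical to the first rows of the lower capture (the horizontal case is the same with rows and columns swapped). The sketch below assumes window borders and the taskbar have already been cropped off, uses `GetPixel` only for clarity, and the `Stitcher`/`FindVerticalOverlap` names are made up:

```csharp
// Sketch: find the exact vertical overlap between two screenshots of equal width,
// assuming the captures line up pixel-for-pixel (no scaling, borders already cropped).
// Uses slow GetPixel for clarity; switch to LockBits for real use.
using System;
using System.Drawing;

public static class Stitcher
{
    // Returns the number of rows at the bottom of 'top' that are identical to the
    // first rows of 'bottom', or 0 if no overlap is found.
    public static int FindVerticalOverlap(Bitmap top, Bitmap bottom)
    {
        int width = Math.Min(top.Width, bottom.Width);
        int maxOverlap = Math.Min(top.Height, bottom.Height);

        for (int overlap = maxOverlap; overlap > 0; overlap--)
        {
            bool match = true;
            for (int row = 0; row < overlap && match; row++)
            {
                int topRow = top.Height - overlap + row;
                for (int x = 0; x < width; x++)
                {
                    if (top.GetPixel(x, topRow) != bottom.GetPixel(x, row))
                    {
                        match = false;
                        break;
                    }
                }
            }
            if (match) return overlap;   // largest overlap is found first
        }
        return 0;
    }
}
```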
I want to measure the amount of empty space on a slide (in order to overcome slide overcrowding) in a PowerPoint Add-In. Since I have access to each shape on a slide, I was planning to calculate the area each shape takes and subtract it from the total area available. I was wondering whether this is the most efficient method, or whether I could use something else, e.g. image processing techniques.
Unless you know that the slide background will always be a plain/solid color, I don't think image processing techniques would help; they would probably require exporting each slide as an image, which would be more time consuming than stepping through the shapes on each slide.
Summing the area occupied by each shape and comparing it to the overall slide size would give a good rough answer. To do a better job, you'd want to account for overlapping shapes: two same-sized squares, one atop the other, only occupy the area of one of them. You may also want to consider the shapes on each slide's layout, and you'd want to test placeholder shapes to see whether they're empty; they occupy space in editing views, but if empty, they won't appear in printouts or slide shows.
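If an exact union area isn't required, one rough way to handle the overlap problem is to rasterise each shape's bounding box onto a coarse grid and count the covered cells, so overlapping shapes are only counted once. The sketch below is deliberately generic: it assumes you have already pulled each shape's Left/Top/Width/Height into a `RectangleF`, and `SlideCoverage` is a made-up helper name:

```csharp
// Sketch: approximate the occupied fraction of a slide by rasterising shape bounding
// boxes onto a coarse grid, so overlapping shapes are only counted once.
// The RectangleF values are assumed to come from each Shape's Left/Top/Width/Height.
using System;
using System.Collections.Generic;
using System.Drawing;

public static class SlideCoverage
{
    public static double OccupiedFraction(
        IEnumerable<RectangleF> shapeBounds,   // bounding boxes of the shapes, in points
        float slideWidth, float slideHeight,   // slide size, in points
        int gridSize = 200)                    // resolution of the approximation
    {
        var covered = new bool[gridSize, gridSize];
        float cellW = slideWidth / gridSize;
        float cellH = slideHeight / gridSize;

        foreach (var r in shapeBounds)
        {
            int x0 = Math.Max(0, (int)(r.Left / cellW));
            int x1 = Math.Min(gridSize - 1, (int)(r.Right / cellW));
            int y0 = Math.Max(0, (int)(r.Top / cellH));
            int y1 = Math.Min(gridSize - 1, (int)(r.Bottom / cellH));
            for (int y = y0; y <= y1; y++)
                for (int x = x0; x <= x1; x++)
                    covered[y, x] = true;
        }

        int count = 0;
        foreach (bool c in covered) if (c) count++;
        return (double)count / (gridSize * gridSize);
    }
}
```

Bounding boxes overestimate non-rectangular shapes, but for an overcrowding heuristic that is usually acceptable, and the grid resolution trades accuracy against speed.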
I am writing some software that periodically checks a camera image to identify whether an object has been introduced to the viewed scene. I am using ImageMagick in my WinForms software to compare two images to create a third. In the third image, a pixel is white if the first two images had a similar-coloured pixel, and black if they were different. So if the user sees a group of black pixels, they will know something has been placed in the scene that wasn't there previously, as seen here:
The user will not be seeing this image, so I would like my software to identify this for me. I don't need anything too complex from this analysis - just a Boolean for whether something was changed in the scene.
In my mind, there are two approaches to analysing this: counting the number of black pixels in the image, or writing some algorithm to identify patches of black. My question is about the second approach, since it feels like the more correct one. The image is a bit too noisy (you can see false positives along straight edges) for me to feel entirely comfortable with simple counting.
To identify a group, I have considered using some for loops to look at the colours of the pixels surrounding every pixel, but this seems like it would take forever. My processing can't take more than a few seconds, so I need to be wary of this. Are there cleaner or more efficient ways to identify groups of similarly-coloured pixels, or will I need to run loops and just be as efficient as possible?
Threshold the image so that black pixels have a value of 1 and non-black pixels have a value of 0.
Use connected component labeling to find all the groups of connected black pixels. http://www.imagemagick.org/script/connected-components.php
Filter out the components that are too small or don't have the correct shape (for example, the sides of the frame can produce a long line with a lot of black pixels, but you are not expecting a long line to be a valid group of black pixels).
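If you would rather do the labeling in your own WinForms code instead of calling out to ImageMagick, a simple breadth-first-search labeling over the thresholded diff is one option. The sketch below assumes the diff has already been reduced to a `bool[,]` (true = black/changed); `BlobDetector` and `minSize` are illustrative names:

```csharp
// Sketch: BFS-based connected component labeling over a thresholded diff image.
// 'changed[y, x]' is assumed true where the diff pixel was black (i.e. different).
// Components smaller than minSize are treated as noise.
using System.Collections.Generic;

public static class BlobDetector
{
    public static List<List<(int X, int Y)>> FindBlobs(bool[,] changed, int minSize)
    {
        int h = changed.GetLength(0), w = changed.GetLength(1);
        var visited = new bool[h, w];
        var blobs = new List<List<(int X, int Y)>>();

        for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            if (!changed[y, x] || visited[y, x]) continue;

            // Grow one component with a breadth-first search (4-connectivity).
            var blob = new List<(int X, int Y)>();
            var queue = new Queue<(int X, int Y)>();
            queue.Enqueue((x, y));
            visited[y, x] = true;

            while (queue.Count > 0)
            {
                var (cx, cy) = queue.Dequeue();
                blob.Add((cx, cy));
                foreach (var (nx, ny) in new[] { (cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1) })
                {
                    if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                    if (!changed[ny, nx] || visited[ny, nx]) continue;
                    visited[ny, nx] = true;
                    queue.Enqueue((nx, ny));
                }
            }

            if (blob.Count >= minSize) blobs.Add(blob);
        }
        return blobs;
    }
}
```

The Boolean you want is then just whether `FindBlobs(diff, minSize)` returns anything, and the `minSize` filter drops the small false positives along straight edges.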
At this point I assume you have a sense of the scale of the objects you are interested in capturing. For example, suppose you had an entirely non-black screen (ignoring noise for the purposes of this discussion) except for a black object only about 10 pixels in diameter; that seems frightfully small, and probably not enough information to be useful.
Once you have determined the minimum size of black mass you are willing to accept, I would then go about querying a staggered matrix.
i.e., sampling the image in a staggered, checkerboard-like pattern rather than checking every pixel.
Use a little math to determine how much noise is acceptable.
Once you get positive results (black samples), investigate those sectors more closely.
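As a rough illustration of that staggered-sampling idea (my interpretation, not a specific published algorithm), the sketch below probes the diff image on a sparse, checkerboard-like grid and flags sectors whose black-sample count exceeds the expected noise level; only flagged sectors would then get a full-resolution look. `StaggeredScan`, the sector size, step, and threshold are all placeholders:

```csharp
// Sketch of the staggered-sampling idea: probe the diff image on a sparse,
// checkerboard-like grid and flag grid sectors whose black-sample count exceeds
// the expected noise level. Only flagged sectors need a full-resolution check.
using System.Collections.Generic;
using System.Drawing;

public static class StaggeredScan
{
    public static List<Rectangle> FlagSectors(Bitmap diff, int sectorSize, int step, int noiseThreshold)
    {
        var flagged = new List<Rectangle>();

        for (int sy = 0; sy < diff.Height; sy += sectorSize)
        for (int sx = 0; sx < diff.Width; sx += sectorSize)
        {
            int blackSamples = 0;
            for (int y = sy; y < sy + sectorSize && y < diff.Height; y += step)
            {
                // Stagger the x offset on alternating rows (checkerboard pattern).
                int offset = ((y - sy) / step % 2 == 0) ? 0 : step / 2;
                for (int x = sx + offset; x < sx + sectorSize && x < diff.Width; x += step)
                {
                    if (diff.GetPixel(x, y).GetBrightness() < 0.1f)   // treat as "black"
                        blackSamples++;
                }
            }
            if (blackSamples > noiseThreshold)
                flagged.Add(new Rectangle(sx, sy, sectorSize, sectorSize));
        }
        return flagged;
    }
}
```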
I have a bunch of images that look like this:
After processing, I want them to look like this:
I know that I can easily make those black areas white using the flood fill algorithm, but first I need to make sure that a black area is not part of the text. How can I do that? Those areas are huge compared to the letters, so maybe I can just find the size of each black area and make the areas that are bigger than some threshold n white?
This is all about machine vision.
You could write your own code for something like "Connected-Component Labeling".
This is just one possible approach: start at the top-left corner and gather all pixels that have almost the same grey value, save their coordinates, and fill the area white if it contains more pixels than a certain threshold.
But I think you'll have some problems with the black "line" in the middle.
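A minimal sketch of that idea, assuming a grayscale-ish, non-indexed `Bitmap`: grow each connected dark region and repaint it white only when it is larger than a pixel-count threshold, so letters (small regions) are left alone. `BlackAreaRemover` and the thresholds are illustrative:

```csharp
// Sketch of the "whiten dark areas larger than n" idea: grow each connected dark
// region, and repaint it white only if its pixel count exceeds the threshold, so
// letters (small regions) are left untouched. Assumes a non-indexed pixel format.
using System.Collections.Generic;
using System.Drawing;

public static class BlackAreaRemover
{
    public static void WhitenLargeDarkAreas(Bitmap img, int minPixels, float darkThreshold = 0.2f)
    {
        var visited = new bool[img.Height, img.Width];

        for (int y = 0; y < img.Height; y++)
        for (int x = 0; x < img.Width; x++)
        {
            if (visited[y, x] || img.GetPixel(x, y).GetBrightness() >= darkThreshold)
                continue;

            // Collect one connected dark region with an explicit stack.
            var region = new List<Point>();
            var stack = new Stack<Point>();
            stack.Push(new Point(x, y));
            visited[y, x] = true;

            while (stack.Count > 0)
            {
                Point p = stack.Pop();
                region.Add(p);
                foreach (var n in new[] { new Point(p.X + 1, p.Y), new Point(p.X - 1, p.Y),
                                          new Point(p.X, p.Y + 1), new Point(p.X, p.Y - 1) })
                {
                    if (n.X < 0 || n.Y < 0 || n.X >= img.Width || n.Y >= img.Height) continue;
                    if (visited[n.Y, n.X] || img.GetPixel(n.X, n.Y).GetBrightness() >= darkThreshold) continue;
                    visited[n.Y, n.X] = true;
                    stack.Push(n);
                }
            }

            // Only large regions are assumed to be scan artifacts rather than text.
            if (region.Count >= minPixels)
                foreach (Point p in region)
                    img.SetPixel(p.X, p.Y, Color.White);
        }
    }
}
```

The black "line" mentioned above is the tricky case: if it touches a large black area it will be whitened along with it, so you may need an extra shape check (e.g. aspect ratio) before filling.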
We're currently creating a simple application for image manipulation in Silverlight, and we've hit a bit of a snag. We want users to be able to select an area of an image (either by drawing a freehand line around their chosen area or by creating a polygon around it), and then be able to apply effects to the pixels within that selection.
Creating the selection on the image is easy enough, but we want a really fast algorithm for deciding which pixels should be manipulated (i.e. something to detect which pixels are within the user's selection).
We've thought of three possibilities so far, but we're sure that there must be a really efficient and quick way of doing this that's better than these.
1. Pixel by pixel.
We just go through every pixel in an image and check whether it's within the user selection. Obviously this is far too slow!
2. Using a Line Crossing Algorithm.
The type of thing seen here (a ray-casting sketch is given after this list).
3. Flood Fill.
Select the pixels along the path of the selection and then perform a flood fill within that selection. This might work fine.
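For option 2, the standard ray-casting (crossing number) point-in-polygon test is short enough to sketch: a point is inside the polygon if a horizontal ray from it crosses the polygon's edges an odd number of times, and restricting the test to pixels inside the polygon's bounding box keeps it fast. `PolygonHitTest` is a made-up name and the code assumes the Silverlight/WPF `System.Windows.Point`:

```csharp
// Sketch of option 2: a standard ray-casting (crossing number) test. A point is
// inside the user's polygon if a horizontal ray from it crosses the polygon's
// edges an odd number of times. Run this per pixel inside the polygon's bounding box.
using System.Windows;   // Silverlight/WPF Point; swap for your own point type if needed

public static class PolygonHitTest
{
    public static bool Contains(Point[] polygon, double px, double py)
    {
        bool inside = false;
        for (int i = 0, j = polygon.Length - 1; i < polygon.Length; j = i++)
        {
            // Does the edge (i, j) straddle the horizontal line through py,
            // and is the crossing point to the right of (px, py)?
            bool crosses = (polygon[i].Y > py) != (polygon[j].Y > py) &&
                           px < (polygon[j].X - polygon[i].X) * (py - polygon[i].Y) /
                                (polygon[j].Y - polygon[i].Y) + polygon[i].X;
            if (crosses) inside = !inside;
        }
        return inside;
    }
}
```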
This must be a problem that's commonly solved, so we're guessing there are plenty more solutions that we've not even thought of.
What would you recommend?
The flood fill algorithm is a good choice.
Take a look at this implementation:
Queue-Linear Flood Fill: A Fast Flood Fill Algorithm
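The linked article describes an optimised queue-linear version; the sketch below is a much simpler queue-based flood fill, shown only to illustrate the idea of flooding outward from a seed inside the selection and stopping at the rasterised outline. `SelectionFill` and the `outline` mask are assumptions:

```csharp
// A much simpler queue-based flood fill than the optimised one linked above, shown
// only to illustrate the idea: starting from a seed inside the selection, mark every
// reachable pixel that is not on the selection outline. 'outline[y, x]' is assumed
// true for pixels the user's freehand path passes through.
using System.Collections.Generic;

public static class SelectionFill
{
    public static bool[,] Fill(bool[,] outline, int seedX, int seedY)
    {
        int h = outline.GetLength(0), w = outline.GetLength(1);
        var selected = new bool[h, w];
        var queue = new Queue<(int X, int Y)>();
        queue.Enqueue((seedX, seedY));
        selected[seedY, seedX] = true;

        while (queue.Count > 0)
        {
            var (x, y) = queue.Dequeue();
            foreach (var (nx, ny) in new[] { (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1) })
            {
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (selected[ny, nx] || outline[ny, nx]) continue;   // stop at the drawn border
                selected[ny, nx] = true;
                queue.Enqueue((nx, ny));
            }
        }
        return selected;   // true = pixel is inside the selection
    }
}
```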
You should be able to use your polygon to create a clipping path. The mini-language for describing polygons in Silverlight is quite well documented.
Alter the pixels on a copy of your image (altering all pixels is usually easier than altering only some), then use the clipping path to render only the desired area of the changes back onto the original image (probably using an extra buffer bitmap for the result).
Hope this helps. Just throwing the ideas out to see if any stick :)
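A minimal sketch of the clipping-path idea, assuming Silverlight/WPF: build a `PathGeometry` from the user's selection points and assign it to the `Clip` property of the element that displays the modified copy, so only the selected area of the edit shows through. `ClipHelper` is a made-up name:

```csharp
// Sketch of the clipping-path idea in Silverlight/WPF: build a PathGeometry from the
// user's selection points and assign it to the Clip property of the image element
// that shows the modified copy, so only the selected area of the edit is visible.
using System.Collections.Generic;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media;

public static class ClipHelper
{
    public static void ApplySelectionClip(Image modifiedImageElement, IList<Point> selectionPoints)
    {
        var figure = new PathFigure { StartPoint = selectionPoints[0], IsClosed = true };
        for (int i = 1; i < selectionPoints.Count; i++)
            figure.Segments.Add(new LineSegment { Point = selectionPoints[i] });

        var geometry = new PathGeometry();
        geometry.Figures.Add(figure);

        modifiedImageElement.Clip = geometry;   // only the clipped region renders
    }
}
```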