Compare images in C# .NET

I have a scanned image of a document which has multiple boxes which may or may not contain signatures. I am able to identify the boxes, but now I want to figure out which boxes contain signatures. I tried to compare the image with a reference blank-box image. Ideally a pixel match should do, but my images can be tilted by some angle, which makes it tough. I am programming in .NET.
Any suggestions?
Edited on Jan 04:
I asked this question on Nov 25. At that time, the proposed solution was to count the number of black pixels in the image, and that worked for me. However, the application's performance is now poor, because it has to check black pixels in 20 rectangles of 100*1000 pixels each.
Is there any better solution to determine whether an image is blank?

Perhaps you could sum the number of pixels matching the 'blank' colour, and then sum the number of pixels not matching the blank colour. If the number of non-blank pixels is over a certain level, then assume there is a signature. Logically, an empty box will contain almost entirely blank pixels, and a box with a signature in it will contain far fewer blank pixels.
Edit: one extra point - you will want a degree of tolerance for what counts as a 'blank' pixel colour, otherwise a bit of dust or a gradient introduced during scanning will register as a non-blank pixel.
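A minimal sketch of that counting idea, assuming a System.Drawing.Bitmap per box and two hypothetical thresholds (a brightness cut-off and an ink ratio) that you would tune for your scans. Note that GetPixel is slow, so for 20 boxes you would likely want the LockBits technique mentioned in a later answer instead:

    using System.Drawing;

    // Hypothetical thresholds: pixels darker than brightnessThreshold count as ink,
    // and a box is "signed" if more than inkRatioThreshold of its pixels are ink.
    static bool LooksSigned(Bitmap box, int brightnessThreshold = 200, double inkRatioThreshold = 0.02)
    {
        int inkPixels = 0;
        for (int y = 0; y < box.Height; y++)
        {
            for (int x = 0; x < box.Width; x++)
            {
                Color c = box.GetPixel(x, y);
                // Average the channels; the threshold absorbs dust and scan gradients.
                if ((c.R + c.G + c.B) / 3 < brightnessThreshold)
                    inkPixels++;
            }
        }
        return (double)inkPixels / (box.Width * box.Height) > inkRatioThreshold;
    }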

You should try to normalise the rotation of the images first. One way to do this is to place markers on the page which can be lined up (a black square in each corner of the page is what I have seen used before) to ensure that the page's rotation is correct before you try to identify the signatures.

Maybe the quickest way is to perform an MD5 Hash on the byte stream of the image and compare the results? Look here for further info on this.
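For what it's worth, a hash comparison only tells you whether two images are byte-for-byte identical, so it would not cope with tilt or scanner noise. A minimal sketch, assuming System.Drawing and the built-in MD5 class:

    using System;
    using System.Drawing;
    using System.Drawing.Imaging;
    using System.IO;
    using System.Security.Cryptography;

    // Hash the image bytes; two boxes compare equal only if they are identical.
    static string HashImage(Bitmap image)
    {
        using (var ms = new MemoryStream())
        using (var md5 = MD5.Create())
        {
            image.Save(ms, ImageFormat.Png);    // deterministic encoding before hashing
            return BitConverter.ToString(md5.ComputeHash(ms.ToArray()));
        }
    }

    // Usage: bool identical = HashImage(boxA) == HashImage(boxB);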
Hope this helps,
Best regards,
Tom.

Related

Identify groups of similar-coloured pixels

I am writing some software that periodically checks a camera image to identify whether an object has been introduced to the viewed scene. I am using ImageMagick in my WinForms software to compare two images to create a third. In the third image, a pixel is white if the first two images had a similar-coloured pixel, and black if they were different. So if the user sees a group of black pixels, they will know something has been placed in the scene that wasn't there previously, as seen here:
The user will not be seeing this image, so I would like my software to identify this for me. I don't need anything too complex from this analysis - just a Boolean for whether something was changed in the scene.
In my mind, there are two approaches to analysing this: counting the number of black pixels in the image, or writing some algorithm to identify patches of black. My question is about the second approach, since it feels like the more correct one. The image is a bit too noisy (you can see false positives on straight edges) for me to feel entirely comfortable with counting.
To identify a group, I have considered using some for loops to look at the colours of pixels surrounding every pixel but this seems like it would take forever. My processing time can't take more than a few seconds so I need to be wary of this. Are there cleaner or more-efficient ways to identify groups of similarly-coloured pixels? Or will I need to run loops and be as efficient as possible?
Threshold the image so that black pixels get the value 1 and non-black pixels get 0.
Use connected component labeling to find all the groups of connected black pixels (a plain C# sketch of this step follows below): http://www.imagemagick.org/script/connected-components.php
Filter out the components that are too small or don't have the correct shape (for example, a long line along the sides has a lot of black pixels, but you are not expecting a long line to be a valid group of black pixels).
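Here is a plain C# sketch of the labeling step, in case you'd rather not call out to ImageMagick; the black[,] mask and minSize parameter are assumptions for illustration, folding in the thresholding and size-filter steps above:

    using System.Collections.Generic;

    // Label 4-connected groups of 'true' (black) pixels in a thresholded mask
    // and return the sizes of the components that pass the minimum-size filter.
    static List<int> FindBlackBlobs(bool[,] black, int minSize)
    {
        int h = black.GetLength(0), w = black.GetLength(1);
        var visited = new bool[h, w];
        var sizes = new List<int>();

        for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            if (!black[y, x] || visited[y, x]) continue;

            // Breadth-first flood fill from this seed pixel.
            int size = 0;
            var queue = new Queue<(int Y, int X)>();
            queue.Enqueue((y, x));
            visited[y, x] = true;
            while (queue.Count > 0)
            {
                var (cy, cx) = queue.Dequeue();
                size++;
                foreach (var (ny, nx) in new[] { (cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1) })
                {
                    if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                    if (!black[ny, nx] || visited[ny, nx]) continue;
                    visited[ny, nx] = true;
                    queue.Enqueue((ny, nx));
                }
            }
            if (size >= minSize) sizes.Add(size);
        }
        return sizes;
    }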
At this point I assume you have a sense of the scale of the objects you are interested in capturing. For example, suppose you had an entirely non-black screen (let's ignore noise for the purpose of this discussion) except for a black object only roughly 10 pixels in diameter; that seems frightfully small, and probably not enough information to be useful.
Once you have determined what is the minimum size of black mass you are willing to accept, I would then go about querying a staggered matrix.
i.e., a pattern like:
Use a little math to determine what acceptable noise is.
Once you have positive results (pixels that are black), investigate those sectors.
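The sampling pattern above wasn't included, but one common reading of a "staggered matrix" is a checkerboard-style sampling grid. A hypothetical sketch, where black[,] is the thresholded image and step is the sampling distance:

    using System.Collections.Generic;

    // Hypothetical interpretation: sample every 'step' pixels, offsetting every
    // other row by half a step, and return the black samples worth investigating.
    static List<(int X, int Y)> StaggeredHits(bool[,] black, int step)
    {
        var hits = new List<(int X, int Y)>();
        for (int y = 0; y < black.GetLength(0); y += step)
        {
            int offset = (y / step) % 2 == 0 ? 0 : step / 2;   // stagger alternate rows
            for (int x = offset; x < black.GetLength(1); x += step)
            {
                if (black[y, x]) hits.Add((x, y));
            }
        }
        return hits;
    }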

C# draw a string pixel perfect

I'm trying to draw a string using either TextRenderer.DrawText, Graphics.DrawString or GraphicsPath.AddString - the main purpose is to extract all fonts to bitmaps so I can edit them and use them as bitmaps with shaders in a game.
With TextRenderer.DrawText and Graphics.DrawString I get a top padding of varying degrees, so I tried GraphicsPath.AddString. I extract the font family's ascent and descent heights, but they are wildly unusable with EmHeight (using ascent and descent together with EmHeight is how Microsoft suggests doing what I am trying to do - see http://msdn.microsoft.com/en-us/library/xwf9s90b%28v=vs.110%29.aspx). Has anyone ever successfully drawn pixel-perfect fonts using C#? Every time I try or look it up, the padding from TextRenderer and Graphics screws up the drawing, and this new GraphicsPath method seems to have an issue with using a specific scale.
The usual methods using TextRenderer or MeasureString will give you a SizeF containing the bounds of the string you measure. Most formats include a little slack so you can compose text by adding strings together.
The aim of these methods is to help create blocks of text by letting you measure when a line will be full or how many pixels to advance for the next line.
They are not really meant for measuring single characters.
For this there is a special StringFormat, GenericTypographic, as described here, which leaves out the extra white space.
To get an even more precise measurement you can use GraphicsPath.AddString and then GetBounds, maybe after switching anti-aliasing off.
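A rough sketch of that measurement (the font family and em-size here are arbitrary placeholders):

    using System.Drawing;
    using System.Drawing.Drawing2D;

    // Measure the tight ink bounds of a single character.
    static RectangleF MeasureGlyph(string character)
    {
        using (var path = new GraphicsPath())
        {
            path.AddString(character, new FontFamily("Arial"), (int)FontStyle.Regular,
                           64f, new PointF(0f, 0f), StringFormat.GenericTypographic);
            // GetBounds returns the actual extents with no padding; the X/Y of the
            // rectangle show how far the glyph sits from the drawing origin.
            return path.GetBounds();
        }
    }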
Now, if you wanted to draw a single character precisely, say centered on a Button, this would do the job.
But you know all that, and your aim is different - if I understand you correctly, you want to create Bitmaps from each character in order to later join them to form text. This means you need them to line up correctly vertically, i.e. sit on the same baseline.
The sizes of the characters don't help you here; normally you'd need the baseline of each character, which you don't get, at least not for anything descending like 'f' or even just ',' etc.
But it wouldn't help you either, because in GDI you don't print/draw to the baseline anyway.
What you should do, imo, is either draw one long string with all characters, so that they're all lined up right, and then cut out the characters one by one; or draw each character on its own, but suffix all or some characters you know to have ascenders and descenders, and then only pick the first columns from the result.
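If you go the long-string route, one possible way (not the only one) to find where each character landed is Graphics.MeasureCharacterRanges with the GenericTypographic format; note that GDI+ limits the number of measurable ranges per call (historically 32), so a full character set may need to be measured in batches. A sketch:

    using System.Drawing;

    // Render all characters in one string, then locate each one with
    // MeasureCharacterRanges so the cut-outs share a common baseline.
    static Region[] LocateCharacters(Graphics g, string characters, Font font, RectangleF layout)
    {
        var ranges = new CharacterRange[characters.Length];
        for (int i = 0; i < characters.Length; i++)
            ranges[i] = new CharacterRange(i, 1);

        var format = new StringFormat(StringFormat.GenericTypographic);
        format.SetMeasurableCharacterRanges(ranges);
        // Each returned Region's bounds give the rectangle to copy out of the
        // bitmap the same string was drawn into with the same font and format.
        return g.MeasureCharacterRanges(characters, font, layout, format);
    }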
So the only way I figured out how to do this is to first draw the string to a GraphicsPath, measure all the empty spots in the path, and get its height only after I've measured every spot; then I redraw the string (I have an attempt counter to limit attempts while increasing em-to-pixel accuracy), taking the old size and new size into account with a modifier, and finally extract the final size and store it.
Only this way do I get around every font having a weird top padding, relative to a (0,0) point, that isn't accounted for by its ascent, internal overflow (e.g. Ñ) or descent.

Simple algorithm to crop empty borders from an image by code?

Currently I'm looking for a rather fast and reasonably accurate algorithm in C#/.NET to do these steps in code:
Load an image into memory.
Starting from the color at position (0,0), find the unoccupied space.
Crop away this unnecessary space.
I've illustrated what I want to achieve:
What I can imagine is to get the color of the pixel at (0,0) and then do some unsafe line-by-line/column-by-column walking through all pixels until I meet a pixel with another color, then cut away the border.
I just fear that this is really really slow.
So my question is:
Are you aware of any quick algorithms (ideally without any 3rd-party libraries) to cut away "empty" borders from an in-memory image/bitmap?
Side note: the algorithm should be "reasonably accurate", not 100% accurate. Some tolerance, like cropping one line too many or too few, would be perfectly OK.
Addition 1:
I've just finished implementing my brute force algorithm in the simplest possible manner. See the code over at Pastebin.com.
If you know your image is centered, you might try walking diagonally (i.e. (0,0), (1,1), ..., (n,n)) until you have a hit, then backtrack one line at a time, checking until you find an "empty" line (in each dimension). For the image you posted, it would cut out a lot of comparisons.
You should be able to do that from 2 opposing corners concurrently to get some multi-core action.
Of course, hopefully you don't hit the pathological case of a 1-pixel-wide line in the center of the image :) Or the doubly pathological case of disconnected objects arranged such that the whole image is centered but nothing crosses the diagonal.
One improvement you could make is to give your "hit color" some tolerance (adjustable maybe?)
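A sketch of that diagonal probe, assuming a hypothetical isBackground(x, y) predicate that applies the colour tolerance mentioned above:

    using System;

    // Walk the diagonal until a non-background pixel is found; the caller then
    // backtracks line by line from that point in each dimension.
    static (int X, int Y)? FirstDiagonalHit(int width, int height, Func<int, int, bool> isBackground)
    {
        for (int i = 0; i < Math.Min(width, height); i++)
        {
            if (!isBackground(i, i))
                return (i, i);
        }
        return null;   // nothing crossed the diagonal (the pathological case above)
    }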
The algorithm you are suggesting is a brute-force algorithm and will work every time for all types of images.
But for special cases like yours, where the subject is centered and is a continuous blob of colour (as you have shown in your example), a binary-search-style algorithm can be applied (see the sketch below):
Start from the centre line (0, length/2) and work outwards in one direction at a time, examining lines as you would in a binary search.
Do this for all four sides.
This reduces the complexity to O(log n).
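A sketch of the bisection for one side, assuming the subject is a single blob that crosses the centre row so that rows between the top edge and the centre switch from "empty" to "non-empty" exactly once (rowIsEmpty is a hypothetical predicate):

    using System;

    // Bisect between the top edge (assumed empty) and the centre row (assumed
    // non-empty) to find the first non-empty row; repeat for the other sides.
    static int FindTopEdge(Func<int, bool> rowIsEmpty, int centreRow)
    {
        int lo = 0, hi = centreRow;
        while (lo < hi)
        {
            int mid = (lo + hi) / 2;
            if (rowIsEmpty(mid)) lo = mid + 1;
            else hi = mid;
        }
        return lo;   // index of the first non-empty row
    }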
For starters, your current algorithm is basically the best possible.
If you want it to run faster, you could code it in C++; this tends to be more efficient than managed unsafe code.
If you stay in C#, you can use the Parallel Extensions to run it on multiple cores. That won't reduce the load on the machine, but it will reduce the latency, if any.
If you happen to have a precomputed thumbnail of the image, you can apply your algorithm to the thumbnail first to get a rough idea.
First, you can convert your bitmap to a byte[] using LockBits(); this will be much faster than GetPixel() and won't require you to go unsafe.
As long as you don't naively search the whole image and instead search one side at a time, you have 95% of the algorithm nailed. Just make sure you are not searching already-cropped pixels, as that might actually make the algorithm worse than the naive one if you have two adjacent edges that crop a lot.
A binary search can improve things a tiny bit, but it's not that significant, as it will maybe save you a line of search for each direction in the best-case scenario.
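A sketch of the LockBits step, assuming 32bpp ARGB for simplicity; once the bytes are copied out, the side-by-side scans become plain array reads:

    using System.Drawing;
    using System.Drawing.Imaging;
    using System.Runtime.InteropServices;

    // Copy the pixel data out once; a pixel (x, y) is then at
    // pixels[y * stride + x * 4] onwards, in BGRA order.
    static byte[] CopyPixels(Bitmap bmp, out int stride)
    {
        var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
        BitmapData data = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
        try
        {
            stride = data.Stride;
            var pixels = new byte[data.Stride * data.Height];
            Marshal.Copy(data.Scan0, pixels, 0, pixels.Length);
            return pixels;
        }
        finally
        {
            bmp.UnlockBits(data);
        }
    }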
Although I prefer Tarang's answer, I'd like to give some hints on how to 'isolate' objects in an image by referring to a given foreground colour and background colour. This is called 'segmentation' and is used in the field of 'optical inspection', where an image is not just cropped to a detected object but objects are counted and measured; things you can measure on an object include area, contour, diameter, etc.
First of all, you usually start by walking through the image from x/y coordinate (0,0), left to right and top to bottom, until you find a pixel whose value differs from the background. The sensitivity of the segmentation is set by defining the grayscale value of the background as well as the grayscale value of the foreground. You may think of walking through the image by coordinates, but from the program's point of view you are just walking through an array of pixels, so you have to deal with the formula that converts an x/y coordinate into the pixel's index in the pixel array. This formula of course needs the width and height of the image.
As for cropping: once you've found the so-called 'pivot point' of your foreground object, you usually walk along the object using a rule that detects neighbouring pixels of the same foreground value. If there is only one object to detect, as in your case, it's easy to store the coordinates of the pixels that are north-most, east-most, south-most and west-most. These four coordinates mark the rectangle your object fits in, and with that information you can calculate the cropped image's width and height.
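A simplified sketch of that bounding-box step - it scans every pixel rather than tracing the contour, with a hypothetical isForeground predicate standing in for the grayscale threshold described above:

    using System;
    using System.Drawing;

    // Track the extreme foreground coordinates; the returned rectangle is the
    // box marked by the north-, east-, south- and west-most pixels.
    static Rectangle ForegroundBounds(int width, int height, Func<int, int, bool> isForeground)
    {
        int minX = width, minY = height, maxX = -1, maxY = -1;
        for (int y = 0; y < height; y++)
            for (int x = 0; x < width; x++)
                if (isForeground(x, y))
                {
                    if (x < minX) minX = x;
                    if (x > maxX) maxX = x;
                    if (y < minY) minY = y;
                    if (y > maxY) maxY = y;
                }
        return maxX < 0 ? Rectangle.Empty
                        : Rectangle.FromLTRB(minX, minY, maxX + 1, maxY + 1);
    }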

How can I convert an image to an ASCII representation of the image?

My idea is to grab each pixel, analyze its RGB value (the 255,255,255-style value) and assign each pixel to one of 10 divisions I will have laid out.
This won't give a full-colour representation, but my point is to make ASCII art that at least resembles the shapes of the objects in the picture - to outline them, so to speak.
Would this work?
If I've understood the process correctly, the effect would be similar to the "posterize" option you get in some paint packages, where the number of colours is reduced to some arbitrary number.
It would work; here's a complete example in C#:
http://www.codeproject.com/KB/web-image/AsciiArt.aspx
Most implementations don't convert one pixel to one character. Instead they divide the image into rectangular regions of pixels and then analyse each region. As well as the overall intensity of the region, you also want to look at how the intensity is spread over the region and pick a matching character.
If you limit your character set to just 10 characters however, you probably won't get a particularly good representation of the original image.
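A rough sketch of the region-based approach, using GetPixel for brevity (LockBits would be faster) and an assumed 10-character ramp from dark to light:

    using System.Drawing;
    using System.Text;

    // Average the brightness of each cell and map it onto a ramp of characters
    // from dark to light; the cell sizes and the ramp are arbitrary choices.
    static string ToAscii(Bitmap image, int cellWidth = 8, int cellHeight = 16)
    {
        const string ramp = "@%#*+=-:. ";   // 10 levels, dark to light
        var sb = new StringBuilder();
        for (int y = 0; y + cellHeight <= image.Height; y += cellHeight)
        {
            for (int x = 0; x + cellWidth <= image.Width; x += cellWidth)
            {
                long sum = 0;
                for (int cy = 0; cy < cellHeight; cy++)
                    for (int cx = 0; cx < cellWidth; cx++)
                    {
                        Color c = image.GetPixel(x + cx, y + cy);
                        sum += (c.R + c.G + c.B) / 3;
                    }
                int average = (int)(sum / (cellWidth * cellHeight));
                sb.Append(ramp[average * (ramp.Length - 1) / 255]);
            }
            sb.AppendLine();
        }
        return sb.ToString();
    }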

Find image position inside a larger image

Basically I want to find the pixel location of a small image inside a large image.
I have searched for something similar to this but have had no luck.
It depends on how similar you want the result to match your query image. If you're trying to match corresponding parts of different photorealistic images, take a look at the Feature detection Wikipedia page. What you want to use depends on the transformation you expect one image to undergo to become the other.
That said, if you are looking for an exact pixel-by-pixel match, a brute-force search is probably bad. That can be O(m^2*n^2) for an m*m image used to search within an n*n image. Using better algorithms, it can be improved to O(n^2), linear in the number of pixels. Downsampling both images and doing a hierarchical kind of search might be a good approach.
You could probably use the AForge Framework to do something like this. It offers a variety of image processing tools. Possibly you could use its blob extraction to extract blobs, then compare those blobs to a stored image you have and see if they match.
If the images are pixel-by-pixel equal, you could start by searching for one pixel that has the same colour as pixel (0,0) in the small image. Once found, compare each pixel in the area that would be covered by the small image. If there are no differences, you have found your position; otherwise, resume the search for the next pixel matching (0,0).
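A naive sketch of that exact-match search; it is the brute-force O(m^2*n^2) approach mentioned above, so it is only practical for small images:

    using System.Drawing;

    // Search every offset; skip quickly unless the first pixel matches, then
    // verify the whole area before reporting a hit.
    static Point? FindSubImage(Bitmap large, Bitmap small)
    {
        int anchor = small.GetPixel(0, 0).ToArgb();
        for (int y = 0; y <= large.Height - small.Height; y++)
            for (int x = 0; x <= large.Width - small.Width; x++)
            {
                if (large.GetPixel(x, y).ToArgb() != anchor) continue;

                bool match = true;
                for (int sy = 0; sy < small.Height && match; sy++)
                    for (int sx = 0; sx < small.Width && match; sx++)
                        if (large.GetPixel(x + sx, y + sy).ToArgb() != small.GetPixel(sx, sy).ToArgb())
                            match = false;
                if (match) return new Point(x, y);
            }
        return null;   // not found
    }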
A Boyer-Moore search sounds like a solution here if you treat your pixels as characters and are looking for an exact match. It's much faster than per-pixel searching as well.
