Is there any way to convert an RGB image to grayscale using a method that will let me convert the grayscale back to RGB again (a lossless method)?
Yes and no. If you mean, "Can I destroy information and then later successfully retrieve it?" the answer is "No." If you mean, "Can I hide information and then later successfully retrieve it?" the answer is "Yes." Depending on the image's file format, there may be ways to embed the original RGB pixel data as a header that could later be read by custom software that you write.
Or I suppose you could save two files (original and grayscale) to the filesystem and link them in some looser way, like applying a file naming scheme that your custom software understands, or keeping a database that tells your custom software where to find the original and grayscale images. There are lots of ways you could choose to keep track of the connection between a grayscale image and its RGB source.
But if you mean, "Can I look at a single grayscale value for a pixel and clairvoyantly infer which three values (r, g, and b) were averaged (or otherwise mathematically combined) to derive that single grayscale value?" then the answer is "No." To prove this to yourself, consider that (10 + 10 + 10) / 3 is equal to (5 + 10 + 15) / 3. Given only the average value (10), can you reliably guess which three numbers were averaged to produce it? No.
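To make the "hide the information and retrieve it later" option concrete, here is a minimal C# sketch using System.Drawing: it writes a grayscale PNG for display plus a sidecar file containing the original RGB bytes, which your own code can read back to rebuild the colour image. The file naming, the sidecar layout and the simple averaging are just illustrations, not a standard format.

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

static void SaveGrayscaleWithSidecar(Bitmap original, string baseName)
{
    using (var gray = new Bitmap(original.Width, original.Height))
    using (var rgb = new BinaryWriter(File.Create(baseName + ".rgb")))
    {
        rgb.Write(original.Width);
        rgb.Write(original.Height);
        for (int y = 0; y < original.Height; y++)
            for (int x = 0; x < original.Width; x++)
            {
                Color c = original.GetPixel(x, y);
                int g = (c.R + c.G + c.B) / 3;                    // the lossy part
                gray.SetPixel(x, y, Color.FromArgb(g, g, g));
                rgb.Write(c.R); rgb.Write(c.G); rgb.Write(c.B);   // the "hidden" original
            }
        gray.Save(baseName + ".png", ImageFormat.Png);
    }
}

Restoring the RGB image is then just a matter of reading the width, height and pixel bytes back out of the .rgb sidecar; the grayscale PNG itself contributes nothing to the reconstruction.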
I have an image like this (white background and black text). When there is no noise, Tesseract recognizes the numbers very well (as you can see, the top and bottom of the number line contain a lot of noise).
But when there is noise, Tesseract tries to recognize it as numbers and adds extra digits to the result. It is really bad. How can I make Tesseract ignore the noise? Preprocessing the image to increase the contrast or sharpen the text doesn't help at all.
If some tool could highlight only the text line, that would be a really good input for Tesseract. Please help me. Thanks, everybody.
You should try eroding and dilating:
The most basic morphological operations are two: Erosion and Dilation.
They have a wide array of uses, i.e. :
Removing noise
...
You could try to downsample your binary image and upsample it again (pyrDown and pyrUp), or you could try to smooth your image with a Gaussian blur. And, as already suggested, erode and dilate your image.
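If pulling in OpenCV is an option, erode/dilate are one-liners there; otherwise binary erosion is easy to write yourself. Below is a minimal C# sketch on a bool[,] mask (true = foreground). Dilation is the same loop with "any neighbour set" instead of "all neighbours set", and an opening (erode then dilate) is what removes small specks while keeping the digits roughly intact.

// Binary erosion with a 3x3 square structuring element.
static bool[,] Erode(bool[,] src)
{
    int h = src.GetLength(0), w = src.GetLength(1);
    var dst = new bool[h, w];
    for (int y = 1; y < h - 1; y++)
        for (int x = 1; x < w - 1; x++)
        {
            bool keep = true;
            for (int dy = -1; dy <= 1 && keep; dy++)
                for (int dx = -1; dx <= 1 && keep; dx++)
                    if (!src[y + dy, x + dx])
                        keep = false;   // a pixel survives only if its whole 3x3 neighbourhood is set
            dst[y, x] = keep;
        }
    return dst;
}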
I see 3 solutions for your problem:
As already suggested, try using erode and dilate or some kind of blur. It's the simplest solution.
Find all contours (the findContours function) and then delete all contours with an area less than some value (try different values; you should find a good one quite fast). Note that the value doesn't have to be constant - for example, you can use 80% of the average contour area (add up all contour areas, divide by the number of contours, and multiply by 0.8).
Find all contours. Create a one-dimensional array of integers with length equal to your image height. Fill the array with zeros. Now, for each contour:
I. Find the top and the bottom points (the points with the biggest and the smallest values of the y coordinate). Let's name these points T and B.
II. Add one to all elements of the array whose index is between B.y and T.y (so if B = (1, 4) and T = (3, 11), then add one to array[4], array[5], array[6], ..., array[11]).
Find the index of the biggest element of the array. Let's name this index v. All contours for which B.y <= v <= T.y should be letters; the other contours are noise. A sketch of this idea follows below.
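A C# sketch of that third idea, assuming you already have the bounding boxes of the contours (for example from OpenCV's findContours plus boundingRect) as System.Drawing rectangles; the types and names are just for illustration:

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Linq;

static List<Rectangle> KeepTextContours(List<Rectangle> boxes, int imageHeight)
{
    // coverage[y] = how many contours span row y
    var coverage = new int[imageHeight];
    foreach (var b in boxes)
        for (int y = Math.Max(b.Top, 0); y < Math.Min(b.Bottom, imageHeight); y++)
            coverage[y]++;

    // The row crossed by the most contours lies on the text line.
    int v = Array.IndexOf(coverage, coverage.Max());

    // Keep only contours whose vertical span contains that row; the rest is noise.
    return boxes.Where(b => b.Top <= v && v < b.Bottom).ToList();
}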
You can easily remove this noise using image processing techniques (morphological operations like erode and dilate). You can use OpenCV for these operations.
Do connected component labeling, that is, blob counting. The noise blobs will never match the size of the numbers, whereas with morphological techniques the numbers also get modified. Label the image, count the number of pixels in each labeled region, and set a threshold (which you can choose easily, since you will only have numbers and noise). cvblob is a C++ library for this, available on Google Code.
I had a similar problem: small noise caused Tesseract to fail. I couldn't use OpenCV, because I was developing a feature on Android and OpenCV was unwanted because of its large size. I don't know if this solution is good, but here is what I did.
I found all black regions in the image (adding the points of each region to that region's own set). Then I checked whether the number of points in a region was below some threshold (I tried values like 10, 25 and 50), and if so, I made all points of that region white.
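The same idea without any library: label the black regions with a flood fill and turn the small ones white. A rough C# sketch on a bool[,] mask (true = black pixel); minSize is the threshold you would tune (10, 25, 50, ...):

using System.Collections.Generic;

static void RemoveSmallRegions(bool[,] black, int minSize)
{
    int h = black.GetLength(0), w = black.GetLength(1);
    var visited = new bool[h, w];
    for (int sy = 0; sy < h; sy++)
        for (int sx = 0; sx < w; sx++)
        {
            if (!black[sy, sx] || visited[sy, sx]) continue;

            // Breadth-first flood fill collecting one connected black region.
            var region = new List<(int y, int x)>();
            var queue = new Queue<(int y, int x)>();
            queue.Enqueue((sy, sx));
            visited[sy, sx] = true;
            while (queue.Count > 0)
            {
                var (cy, cx) = queue.Dequeue();
                region.Add((cy, cx));
                foreach (var (ny, nx) in new[] { (cy - 1, cx), (cy + 1, cx), (cy, cx - 1), (cy, cx + 1) })
                    if (ny >= 0 && ny < h && nx >= 0 && nx < w && black[ny, nx] && !visited[ny, nx])
                    {
                        visited[ny, nx] = true;
                        queue.Enqueue((ny, nx));
                    }
            }

            // Regions far smaller than a digit are treated as noise and erased.
            if (region.Count < minSize)
                foreach (var (ry, rx) in region)
                    black[ry, rx] = false;
        }
}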
I have a problem. My company has given me an awfully boring task. We have two databases of dialog boxes. One of these databases contains images of horrific quality, the other very high quality.
Unfortunately, the dialogs of horrific quality contain important mappings to other info.
I have been tasked with, manually, going through all the bad images and matching them to good images.
Would it be possible to automate this process to any degree? Here is an example of two dialog boxes (randomly pulled from Google images) :
So I am currently trying to write a program in C# to pull these photos from the database, cycle through them, find the ones with common shapes, and return their IDs. What are my best options here?
I really see no reason to use any external libraries for this; I've done this sort of thing many times and the following algorithm works quite well. I'll assume that if you're comparing two images, they have the same dimensions, but you can just resize one if they don't.
badness := 0.0
for x, y over the entire image:
    r, g, b := color at x, y in image 1
    R, G, B := color at x, y in image 2
    badness += (r-R)*(r-R) + (g-G)*(g-G) + (b-B)*(b-B)
badness /= (image width) * (image height)
Now you've got a normalized badness value between two images; the lower the badness, the more likely that the images match. This is simple and effective. There are a variety of things that make it work better or faster in certain cases, but you probably don't need anything like that. You don't even really need to normalize the badness, but this way you can come up with a single threshold for it if you want to look at several possible matches manually.
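In C# with System.Drawing that could look roughly like this (GetPixel is slow; for tens of thousands of images you would switch to LockBits, but the logic is the same):

using System.Drawing;

static double Badness(Bitmap a, Bitmap b)
{
    double badness = 0.0;
    for (int y = 0; y < a.Height; y++)
        for (int x = 0; x < a.Width; x++)
        {
            Color p = a.GetPixel(x, y);
            Color q = b.GetPixel(x, y);
            badness += (p.R - q.R) * (p.R - q.R)
                     + (p.G - q.G) * (p.G - q.G)
                     + (p.B - q.B) * (p.B - q.B);
        }
    return badness / (a.Width * a.Height);   // normalized per pixel, as above
}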
Since this question has gotten some more attention I've decided to add a way to speed this up in cases where you are processing many images many times. I used this approach when I had several tens of thousands of images that I needed to compare, and I was sure that a typical pair of images would be wildly different. I also knew that all of my images would be exactly the same dimensions. In a situation in which you are comparing dialog boxes your typical images may be mostly grey-ish, and some of your images may require resizing (although maybe that just indicates a mis-match), in which case this approach may not gain you as much.
The idea is to form a quad-tree where each node holds the average RGB values of the region it represents. So a 4x4 image would have a root node with RGB values equal to the average RGB value of the whole image, its children would hold the average RGB values of their respective 2x2 regions, and their children would represent individual pixels. (In practice it is a good idea not to go deeper than a region of about 16x16; at that point you should just start comparing individual pixels.)
Before you start comparing images you will also need to decide on a badness threshold. You won't calculate badnesses above this threshold with any reliable accuracy, so this is basically the threshold at which you are willing to label an image as 'not a match'.
Now when you compare image A to image B, first compare the root nodes of their quad-tree representations. Calculate the badness just as you would for a single-pixel image, and if the badness exceeds your threshold then return immediately and report the badness at this level. Because you are using normalized badnesses, and because badnesses are calculated using squared differences, the badness at any particular level will be equal to or less than the badness at lower levels, so if it exceeds the threshold at any point you know it will also exceed the threshold at the level of individual pixels.
If the threshold test passes at an nxn level, just drop down to the next level and compare it as if it were a 2nx2n image. Once you get low enough, just compare the individual pixels. Depending on your corpus of images this may allow you to skip lots of comparisons.
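A simplified way to get the same early-exit behaviour without building an explicit quad-tree is to compare progressively larger downscaled copies and bail out as soon as the badness crosses the threshold. Bitmap resampling only approximates the true region averages, so in practice you would pad the threshold a little. A rough C# sketch, reusing the Badness() helper from above:

static bool ProbablyMatches(Bitmap a, Bitmap b, double threshold)
{
    foreach (int size in new[] { 1, 2, 4, 8, 16 })
    {
        using (var sa = new Bitmap(a, new Size(size, size)))
        using (var sb = new Bitmap(b, new Size(size, size)))
            if (Badness(sa, sb) > threshold)
                return false;                    // already too different at a coarse level
    }
    return Badness(a, b) <= threshold;           // final full-resolution check
}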
I would personally go for an image hashing algorithm.
The goal of image hashing is to transform image content into a feature sequence, in order to obtain a condensed representation.
This feature sequence (i.e. a vector of bits) must be short enough for fast matching and preserve distinguishable features for similarity measurement to be feasible.
There are several algorithms that are freely available through open source communities.
A simple example can be found in this article, where Dr. Neal Krawetz shows how the Average Hash algorithm works:
Reduce size. The fastest way to remove high frequencies and detail is to shrink the image. In this case, shrink it to 8x8 so that there are 64 total pixels. Don't bother keeping the aspect ratio, just crush it down to fit an 8x8 square. This way, the hash will match any variation of the image, regardless of scale or aspect ratio.
Reduce color. The tiny 8x8 picture is converted to a grayscale. This changes the hash from 64 pixels (64 red, 64 green, and 64 blue) to 64 total colors.
Average the colors. Compute the mean value of the 64 colors.
Compute the bits. This is the fun part. Each bit is simply set based on whether the color value is above or below the mean.
Construct the hash. Set the 64 bits into a 64-bit integer. The order does not matter, just as long as you are consistent. (I set the bits from left to right, top to bottom using big-endian.)
David Oftedal wrote a C# command-line application which can classify and compare images using the Average Hash algorithm.
(I tested his implementation with your sample images and I got a 98.4% similarity).
The main benefit of this solution is that you read each image only once, create the hashes, and classify them based upon their similarity (using, for example, the Hamming distance).
In this way you decouple the feature extraction phase from the classification phase, and you can easily switch to another hashing algorithm if you find it's not accurate enough.
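A minimal C# sketch of the Average Hash plus a Hamming distance comparison, using System.Drawing (the grayscale conversion below uses the usual luma weights; any consistent formula works):

using System.Drawing;

static ulong AverageHash(Bitmap image)
{
    using (var small = new Bitmap(image, new Size(8, 8)))   // step 1: shrink to 8x8
    {
        var gray = new double[64];
        double mean = 0;
        for (int y = 0; y < 8; y++)
            for (int x = 0; x < 8; x++)
            {
                Color c = small.GetPixel(x, y);
                gray[y * 8 + x] = 0.299 * c.R + 0.587 * c.G + 0.114 * c.B;   // step 2: grayscale
                mean += gray[y * 8 + x];
            }
        mean /= 64;                                          // step 3: average

        ulong hash = 0;
        for (int i = 0; i < 64; i++)                         // steps 4-5: one bit per pixel
            if (gray[i] >= mean)
                hash |= 1UL << i;
        return hash;
    }
}

// Differing bits between two hashes: 0 = identical, small values = very similar.
static int HammingDistance(ulong a, ulong b)
{
    ulong v = a ^ b;
    int count = 0;
    while (v != 0) { v &= v - 1; count++; }
    return count;
}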
Edit
You can find a simple example here (It includes a test set of 40 images and it gets a 40/40 score).
Here's a topic discussing image similarity with algorithms, already implemented in OpenCV library. You should have no problem importing low-level functions in your C# application.
The Commercial TinEye API is a really good option.
I've done image matching programs in the past, and image processing technology these days is amazing; it's advanced so much.
P.S. Here's where those two random pics you pulled from Google came from: http://www.tineye.com/search/1ec9ebbf1b5b3b81cb52a7e8dbf42cb63126b4ea/
Since this is a one-off job, I'd make do with a script (choose your favorite language; I'd probably pick Perl) and ImageMagick. You could use C# to accomplish the same as the script, although with more code. Just call the command line utilities and parse the resulting output.
The script to check a pair for similarity would be about 10 lines as follows:
First retrieve the sizes with identify and check that the aspect ratios are nearly the same. If not, no match. If so, scale the larger image down to the size of the smaller one with convert. You should experiment a bit in advance with filter options to find the one that produces the most similarity in known-equivalent images. Nine of them are available.
Then use the compare function to produce a similarity metric. Compare is smart enough to deal with translation and cropping. Experiment to find a similarity threshold that doesn't provide too many false positives.
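Since the question mentions C#, the same pipeline can also be driven from C# by shelling out to the ImageMagick tools. On my installs compare writes the metric to stderr in the form "absolute (normalized)"; the exact output format and executable names can vary between ImageMagick versions, so treat this as a sketch:

using System.Diagnostics;
using System.Globalization;

static string Run(string exe, string args, out string stderr)
{
    var psi = new ProcessStartInfo(exe, args)
    {
        RedirectStandardOutput = true,
        RedirectStandardError = true,
        UseShellExecute = false
    };
    using (var p = Process.Start(psi))
    {
        string stdout = p.StandardOutput.ReadToEnd();
        stderr = p.StandardError.ReadToEnd();
        p.WaitForExit();
        return stdout;
    }
}

// Normalized root-mean-squared difference between two files (0 = identical).
static double NormalizedRmse(string fileA, string fileB)
{
    string err;
    Run("compare", $"-metric RMSE \"{fileA}\" \"{fileB}\" null:", out err);
    int open = err.IndexOf('('), close = err.IndexOf(')');
    return double.Parse(err.Substring(open + 1, close - open - 1), CultureInfo.InvariantCulture);
}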
I would do something like this :
If you already know how the blurred images have been blurred, apply the same function to the high quality images before comparison.
Then compare the images using least-squares as suggested above.
The lowest value should give you a match; ideally, you would get 0 if both images are identical.
To speed things up, you could perform most comparisons on downsampled images and then refine on a selected subsample of the images.
If you don't know how the images were degraded, try various probable functions (JPEG compression, downsampling, ...) and repeat.
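For the JPEG case, System.Drawing can apply the degradation for you by re-encoding the high quality image with a low quality setting before the comparison; the quality value of 30 used below is just a guess you would tune:

using System.Drawing;
using System.Drawing.Imaging;
using System.IO;
using System.Linq;

static Bitmap Degrade(Bitmap highQuality, long jpegQuality)
{
    var codec = ImageCodecInfo.GetImageEncoders()
                              .First(c => c.FormatID == ImageFormat.Jpeg.Guid);
    var parameters = new EncoderParameters(1);
    parameters.Param[0] = new EncoderParameter(Encoder.Quality, jpegQuality);

    var ms = new MemoryStream();                 // kept alive: GDI+ needs the stream for the Bitmap's lifetime
    highQuality.Save(ms, codec, parameters);     // re-encode with heavy JPEG compression
    return new Bitmap(ms);                       // decode the degraded copy for comparison
}

// Usage: compare Degrade(goodImage, 30) against the bad image with your least-squares metric.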
You could try Content-Based Image Retrieval (CBIR).
To put it bluntly:
For every image in the database, generate a fingerprint using a Fourier Transform
Load the source image and make a fingerprint of the image
Calculate the Euclidean Distance between the source and all the images in the database
Sort the results
I think a hybrid approach to this would be best to solve your particular batch matching problem:
Apply the image hashing algorithm suggested by #Paolo Morreti to all images.
For each image in one set, find the subset of images with a hash closer than a set distance.
For this reduced search space you can now apply expensive matching methods as suggested by #Running Wild or #Raskolnikov ... the best one wins.
IMHO, the best solution is to blur both images and then use some similarity measure (correlation, mutual information, etc.) to get the top K (maybe K = 5?) choices.
If you extract the contours from the image, you can use ShapeContext to get a very good matching of images.
ShapeContext is built for exactly this kind of thing (comparing images based on mutual shapes).
ShapeContext implementation links:
Original publication
A good PPT on the subject
CodeProject page about ShapeContext
*You might need to try a few "contour extraction" techniques like thresholding or a Fourier transform, or take a look at this CodeProject page about contour extraction.
Good Luck.
If you calculate just the pixel difference of the images, it will only work if the images are the same size, or if you know exactly how to scale them in the horizontal and vertical directions; you also won't get any shift or rotation invariance.
So I recommend using a pixel difference metric only if you have the simplest form of the problem (the images are the same in all characteristics but the quality is different - and by the way, why is the quality different? JPEG artifacts or just rescaling?). Otherwise I recommend normalized cross-correlation; it's a more stable metric.
You can do it with FFTW or with OpenCV.
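A minimal sketch of normalized cross-correlation in C#, on two equally sized grayscale images flattened into double arrays. The result is in [-1, 1] and closer to 1 means more similar; a completely flat image would divide by zero, so guard for that in real code:

using System;
using System.Linq;

static double NormalizedCrossCorrelation(double[] a, double[] b)
{
    double meanA = a.Average(), meanB = b.Average();
    double num = 0, varA = 0, varB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        double da = a[i] - meanA, db = b[i] - meanB;
        num  += da * db;      // covariance term
        varA += da * da;
        varB += db * db;
    }
    return num / Math.Sqrt(varA * varB);
}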
If the bad quality is just the result of a lower resolution, then:
rescale the high quality image to the low quality image's resolution (or rescale both to an equal low resolution)
compare each pixel color to find the closest match
So, for example, rescaling all of the images to 32x32 and comparing that set pixel by pixel should give you quite reasonable results, and it's still easy to do. The rescaling method can make a difference here, though.
You could try a block-matching algorithm, although I'm not sure its exact effectiveness against your specific problem - http://scien.stanford.edu/pages/labsite/2001/ee368/projects2001/dropbox/project17/block.html - http://www.aforgenet.com/framework/docs/html/05d0ab7d-a1ae-7ea5-9f7b-a966c7824669.htm
Even if this does not work, you should still check out the Aforge.net library. There are several tools there (including block matching from above) that could help you in this process - http://www.aforgenet.com/
I really like Running Wild's algorithm and I think it can be even more effective if you could make the two images more similar, for example by decreasing the quality of the better one.
Running Wild's answer is very close. What you are doing here is calculating the Peak Signal to Noise Ratio (PSNR) for each image. In your case you really only need the Mean Squared Error, but the squaring component helps a great deal in measuring the difference between images.
PSNR Reference
Your code should look like:
sum = 0.0
for(y in imageHeight){
    for(x in imageWidth){
        errorR = firstImage(r,x,y) - secondImage(r,x,y)
        errorG = firstImage(g,x,y) - secondImage(g,x,y)
        errorB = firstImage(b,x,y) - secondImage(b,x,y)
        sum += square(errorR) + square(errorG) + square(errorB)
    }
}
meanSquaredError = (sum / (imageHeight * imageWidth)) / 3
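If you also want the PSNR mentioned above, it follows directly from the mean squared error for 8-bit channels:

psnr = 10 * log10((255 * 255) / meanSquaredError)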
I assume the images from the two databases show the same dialogs, and that the images should be close to identical but of different quality? Then matching images will have the same (or very close to the same) aspect ratio.
If the low quality images were produced from the high quality images (or equivalent image), then you should use the same image processing procedure as a preprocessing step on the high quality image and match with the low quality image database. Then pixel by pixel comparison or histogram matching should work well.
Image matching can use a lot of resources if you have many images. Maybe a multipass approach is a good idea? For example:
Pass 1: use simple measures like aspect ratio to group images (the width and height fields in the db?) (computationally cheap)
Pass 2: match or group by the histogram of the first color channel (or all channels) (relatively computationally cheap)
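For pass 2, a rough C# sketch of grouping by grayscale histogram: build a normalized 256-bin histogram per image and compare pairs with histogram intersection (1.0 = identical histograms). GetPixel is used here only for brevity:

using System;
using System.Drawing;

static double[] GrayHistogram(Bitmap image)
{
    var hist = new double[256];
    for (int y = 0; y < image.Height; y++)
        for (int x = 0; x < image.Width; x++)
        {
            Color c = image.GetPixel(x, y);
            hist[(c.R + c.G + c.B) / 3]++;
        }
    double total = image.Width * image.Height;
    for (int i = 0; i < 256; i++)
        hist[i] /= total;                  // normalize so images of different sizes can be compared
    return hist;
}

static double HistogramIntersection(double[] h1, double[] h2)
{
    double overlap = 0;
    for (int i = 0; i < 256; i++)
        overlap += Math.Min(h1[i], h2[i]);
    return overlap;
}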
I would also recommend OpenCV. You can use it with C, C++, and Python (and soon Java).
Just thinking out loud:
If you take the two images that should be compared as layers and combine them (subtract one from the other), you get a new image (some drawing programs can be scripted to do batch conversion, or you could use the GPU by writing a tiny DirectX or OpenGL program).
Next you would have to get the brightness of the resulting image; the darker it is, the better the match.
Have you tried contour/thresholding techniques in combination with a walking average window (for RGB values)?
I need to do a "fuzzy" image comparison in c# - I have used ImageMagick.NET for stuff in the past and know it's good for the job.
There is a compare command in Image Magick: http://www.imagemagick.org/script/compare.php
And there is a Compare(Image reference) method in ImageMagick.NET; however, it seems that it has been hugely simplified, so there is no way of getting at the verbose output.
I need to be able to get at that so I can match the images using a threshold. Am I missing something - is there a way to get this stuff into ImageMagick.NET if it isn't already there? (I'm no C++ dev by a long shot.) Or am I barking up the wrong tree?
Pardon me if I don't get your question, but won't IsImagesEqual or SimilarityImage work?
IsImagesEqual returns "The normalized maximum quantization error for any single pixel in the image. This distance measure is normalized to a range between 0 and 1. It is independent of the range of red, green, and blue values in your image.
A small normalized mean square error, accessed as image->normalized_mean_error, suggests the images are very similar in spatial layout and color."
The corresponding method in the .NET bindings is Image.Compare, which takes an image and returns a bool. However, if the result is false, the mean error (according to the metric above) is set on the current instance's meanErrorPerPixel, normalizedMaxError, and normalizedMeanError properties.
Aren't these three metrics enough to give you the result of your "fuzzy" compare?
I have a C# app for which I've written GDI+ code that uses Bitmap/TextureBrush rendering to present 2D images, which can have various image processing functions applied. This code is a new path in an application that mimics existing DX9 code, and they share a common library to perform all vector and matrix (e.g. ViewToWorld/WorldToView) operations. My test bed consists of DX9 output images that I compare against the output of the new GDI+ code.
A simple test case that renders to a viewport that matches the Bitmap dimensions (i.e. no zoom or pan) does match pixel-perfect (no binary diff) - but as soon as the image is zoomed up (magnified), I get very minor differences in 5-10% of the pixels. The magnitude of the difference is 1 (occasionally 2)/256. I suspect this is due to interpolation differences.
Question: For a DX9 ortho projection (and identity world space), with a camera perpendicular and centered on a textured quad, is it reasonable to expect DirectX.Direct3D.TextureFilter.Linear to generate identical output to a GDI+ TextureBrush filled rectangle/polygon when using the System.Drawing.Drawing2D.InterpolationMode.Bilinear setting?
For this (magnification) case, the DX9 code is using this (MinFilter,MipFilter set similarly):
Device.SetSamplerState(0, SamplerStageStates.MagFilter, (int)TextureFilter.Linear);
and the GDI+ path is using:
g.InterpolationMode = InterpolationMode.Bilinear;
I thought that "Bilinear Interpolation" was a fairly specific filter definition, but then I noticed that there is another option in GDI+ for "HighQualityBilinear" (which I've tried, with no difference - which makes sense given the description of "added prefiltering for shrinking")
Followup Question: Is it reasonable to expect pixel-perfect output matching between DirectX and GDI+ (assuming all external coordinates passed in are equal)? If not, why not?
Clarification: The images I'm using are opaque grayscale (R=G=B, A=1) using Format32bppPArgb.
Finally, there are a number of other APIs I could be using (Direct2D, WPF, GDI, etc.) - and this question generally applies to comparing the output of "equivalent" bilinear interpolated output images across any two of these. Thanks!
DirectX runs mostly on the GPU, and DX9 may be running shaders. GDI+ runs on completely different algorithms. I don't think it is reasonable to expect the two to come up with exactly pixel-matching outputs.
I'd expect DX9 to have better quality than GDI+, which is a step improvement over the old GDI but not much. GDI+ has long been understood to have trouble with anti-aliasing lines and also with preserving quality in image scaling (which seems to be your problem). In order to have something similar in quality to latest-generation GPU texture processing, you'll need to move to WPF graphics. This gives quality similar to DX.
WPF also uses the GPU (if available) and falls back to software rendering (if no GPU is present), so the output between GPU and software rendering is reasonably close.
EDIT: Although this has been picked as the answer, it is only an early attempt to explain and doesn't really address the true reason. The reader is referred to discussions laid out in the comments to the question and to the answers instead.
Why do you make the assumption that they use the same formula?
Even if they do use the same formula, and you accept that the implementations are different, would you expect the output to be the same?
At the end of the day, the code is designed to work with perception, not to be mathematically precise - although you can get that with CUDA if you want.
Rather than being surprised that you get different results, I would be very surprised if you got pixel-perfect matches.
The way they represent colour is different ... I know for a fact NVIDIA uses a float (maybe a double) to represent colour, whereas GDI uses an int, I believe.
http://en.wikipedia.org/wiki/GPGPU
DX9 introduced Shader Model 2.0, which is when the implementation of colour switched from ints to 24- and 32-bit floats.
Try comparing ATI/AMD rendering to NVIDIA rendering and you can clearly see that colour handling is very different.
I first noticed this in Quake 2 ... the difference between the two cards was staggering - of course that is due to a great many things, the least of which is their bilinear interpolation implementation.
EDIT: the info about how the specification was made appeared after I answered. Anyway, I think the datatypes used to store the values will be different no matter how you specify it. Moreover, the implementation of float is likely to be different. I may be wrong, but I'm pretty sure that C# implements float differently from the C compiler that NVIDIA uses. (And that assumes that GDI+ doesn't just convert the float into the equivalent int....)
Even if I am wrong about that, I would generally hold it to be exceptional to expect two different implementations of an algorithm to be identical. They are optimised for speed, and as a result the difference in optimisation will directly translate to a difference in image quality, as this speed will come from a different approach to cutting corners/approximation.
There are two possibilities for round-off differences. The first is obvious, when the RGB value is calculated as a fraction of the values on either side. The second is more subtle, when calculating the ratio to use when determining the fraction between the two points.
I'd actually be very surprised if two different implementations of the algorithm were pixel-for-pixel identical; there are so many places for a +/-1 difference to occur. Without the exact implementation details of both methods it's impossible to be more precise than that.