Measure difference of two images using emgucv - c#

I need to compare two images and express the difference between them as a percentage. The AbsDiff function in EmguCV alone doesn't give me that. I have already worked through the compare example on the EmguCV wiki. What I want exactly is: how do I get the difference of two images in numerical form?
//EmguCV wiki compare example
//acquire the frame
Frame = capture.RetrieveBgrFrame();
Difference = Previous_Frame.AbsDiff(Frame);
//what I want is
double differenceValue = Previous_Frame."SOMETHING";
If you need more detail, please ask.
Thanks in advance.

EmguCV MatchTemplate based comparison
Bitmap inputMap = //bitmap source image
Image<Gray, Byte> sourceImage = new Image<Gray, Byte>(inputMap);
Bitmap tempBitmap = //Bitmap template image
Image<Gray, Byte> templateImage = new Image<Gray, Byte>(tempBitmap);
Image<Gray, float> resultImage = sourceImage.MatchTemplate(templateImage, Emgu.CV.CvEnum.TemplateMatchingType.CcoeffNormed);
double[] minValues, maxValues;
Point[] minLocations, maxLocations;
resultImage.MinMax(out minValues, out maxValues, out minLocations, out maxLocations);
double percentage = maxValues[0] * 100; // the match (similarity) percentage of the two images
For an exact match, the two images need to have the same width and height; otherwise MatchTemplate will throw an exception.
Alternatively, the template image can be smaller than the source image, in which case MatchTemplate finds the occurrences of the template image within the source image.

EmguCV AbsDiff based comparison
Bitmap inputMap = //bitmap source image
Image<Gray, Byte> sourceImage = new Image<Gray, Byte>(inputMap);
Bitmap tempBitmap = //Bitmap template image
Image<Gray, Byte> templateImage = new Image<Gray, Byte>(tempBitmap);
Image<Gray, byte> resultImage = new Image<Gray, byte>(templateImage.Width, templateImage.Height);
CvInvoke.AbsDiff(sourceImage, templateImage, resultImage);
double diff = CvInvoke.CountNonZero(resultImage);
diff = (diff / (templateImage.Width * templateImage.Height)) * 100; // the difference as a percentage
In my experience, this is more reliable than the MatchTemplate-based comparison: MatchTemplate fails to capture very small changes between two images, whereas AbsDiff picks up even minimal differences.
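One caveat with counting non-zero pixels is that sensor noise makes almost every pixel differ by at least one gray level. A common refinement (my own suggestion, not part of the answer above) is to threshold the difference image first, so only differences above some magnitude contribute to the percentage. This continues from the `resultImage` and `templateImage` variables in the snippet above:

```csharp
// Ignore per-pixel differences below 25 gray levels (a hypothetical noise
// floor; tune it for your images) before counting changed pixels.
CvInvoke.Threshold(resultImage, resultImage, 25, 255, ThresholdType.Binary);
double diff = CvInvoke.CountNonZero(resultImage);
diff = (diff / (double)(templateImage.Width * templateImage.Height)) * 100;
```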

Related

similarity percentage of two images by EMGU.CV

I work on X-ray images, and I want to get the similarity percentage between two monochrome images using the Emgu.CV library in C#.
The attached file contains the two images for which I need to find the similarity percentage.
Can anyone help me find a solution, whether by machine learning or any other approach?
Emgu.CV is the C# wrapper for OpenCV (which is available in C++ and Python).
You can use matchTemplate.
In OpenCV (C++) it is:
Mat image1Img = imread("image1.png", IMREAD_COLOR);
Mat image2Img = imread("image2.png", IMREAD_COLOR);
Mat scoreImg;
double maxScore;
matchTemplate(image1Img, image2Img, scoreImg, TM_CCOEFF_NORMED);
minMaxLoc(scoreImg, 0, &maxScore);
wxLogMessage(wxString::Format("score <%.2f>", maxScore));
In C# you can use as
Image<Gray, Byte> sourceImage = new Image<Gray, Byte>(@"Images/Source.bmp");
Image<Gray, Byte> templateImage = new Image<Gray, Byte>(@"Images/Template.bmp");
Image<Gray, float> resultImage = sourceImage.MatchTemplate(templateImage, Emgu.CV.CvEnum.TemplateMatchingType.CcoeffNormed);
Detail can be checked here

How to detect template image in other image when angle different?

I'm new to image processing and stuck on matching images taken at different angles.
I'm trying to detect a selected template image in a captured camera image. If the template image and the camera image are at exactly the same angle, everything works well. When the two angles differ, the matching fails.
I used EmguCV to match the two images.
What do I need to use to match two images taken at different angles?
This is both image same angled. https://imgur.com/K6bUAZp
This is both image different angled. https://imgur.com/qatg2CV
Image<Bgr, byte> source = new Image<Bgr, byte>(grayMain);     // camera image
Image<Bgr, byte> template = new Image<Bgr, byte>(FrameImage); // template image
Image<Bgr, byte> lastImage = source.Copy();
using (Image<Gray, float> result = source.MatchTemplate(template, TemplateMatchingType.CcoeffNormed))
{
    double[] minVal, maxVal;
    System.Drawing.Point[] minLocations, maxLocations;
    result.MinMax(out minVal, out maxVal, out minLocations, out maxLocations);
    if (maxVal[0] > 0.75)
    {
        Rectangle match = new Rectangle(maxLocations[0], template.Size);
        lastImage.Draw(match, new Bgr(Color.Red), 3);
    }
}
pictureBox.Image = lastImage.Bitmap;
I solved my problem by searching for the rectangle in the camera image, cropping the image to the detected rectangle using AForge.QuadrilateralTransformation, and then using the resulting images (template and cropped image) for matching.
This is the image after cropping -> https://imgur.com/5JqAL5J
After cropping the red rectangle and doing the image matching, the result was this image -> https://imgur.com/Sva3MzO
Hope this helps.
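The crop step described above can be sketched roughly like this. This is a minimal sketch under assumptions: the corner list is a hypothetical output of your own rectangle-detection step, and `QuadrilateralTransformation` comes from AForge.Imaging.Filters:

```csharp
using System.Collections.Generic;
using System.Drawing;
using AForge;
using AForge.Imaging.Filters;

static Bitmap CropToQuad(Bitmap cameraBitmap, List<IntPoint> corners,
                         int templateWidth, int templateHeight)
{
    // Warp the detected quadrilateral into an upright rectangle the same
    // size as the template, so MatchTemplate then compares like with like.
    var quadTransform =
        new QuadrilateralTransformation(corners, templateWidth, templateHeight);
    return quadTransform.Apply(cameraBitmap);
}

// Hypothetical corners of the detected rectangle, clockwise from top-left:
// var corners = new List<IntPoint> { new IntPoint(40, 30), new IntPoint(300, 25),
//                                    new IntPoint(310, 220), new IntPoint(35, 215) };
// Bitmap cropped = CropToQuad(cameraBitmap, corners, template.Width, template.Height);
```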

Emgucv turn the black background transparent

I'm very new to EmguCV, so I need a little help.
The code below is mainly taken from various places found via Google. It takes a jpg file (which has a green background) and allows the values of the h1 and h2 settings to be changed from a separate form, so as to create (reveal) a mask.
Now what I want to be able to do with this mask is to turn it transparent.
At the moment it will just display a black background around a person (for example), and then saves to file.
I need to know how to turn the black background transparent, if this is the correct way to approach this?
Thanks in advance.
What I have so far is in C# :
imgInput = new Image<Bgr, byte>(FileName);
Image<Hsv, Byte> hsvimg = imgInput.Convert<Hsv, Byte>();
//extract the hue and value channels
Image<Gray, Byte>[] channels = hsvimg.Split(); // split into components
Image<Gray, Byte> imghue = channels[0]; // hsv, so channels[0] is hue.
Image<Gray, Byte> imgval = channels[2]; // hsv, so channels[2] is value.
//filter out all but "the color you want"...seems to be 0 to 128 (64, 72) ?
Image<Gray, Byte> huefilter = imghue.InRange(new Gray(h1), new Gray(h2));
// TURN IT TRANSPARENT somewhere around here?
pictureBox2.Image = imgInput.Copy(huefilter).Bitmap;
imgInput.Copy(huefilter).Save("changedImage.png");
I am not sure I really understand what you are trying to do, but a mask is a binary object. A mask is usually black for what you do not want and white for what you do want. As far as I know, there is no such thing as a transparent mask; to me that makes no sense. Masks are used to extract parts of an image by masking out the rest.
Maybe you could elaborate on what it is you want to do?
Doug
I think I may have the solution I was looking for. I found some code on stackoverflow which I've tweaked a little :
public Image<Bgra, Byte> MakeTransparent(Image<Bgr, Byte> image, double r1, double r2)
{
    Mat imageMat = image.Mat;
    Mat finalMat = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 4);
    Mat tmp = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 1);
    Mat alpha = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 1);
    // build an alpha channel by thresholding the grayscale image
    CvInvoke.CvtColor(imageMat, tmp, ColorConversion.Bgr2Gray);
    CvInvoke.Threshold(tmp, alpha, (int)r1, (int)r2, ThresholdType.Binary);
    // split into B, G, R channels and append the alpha channel
    VectorOfMat bgr = new VectorOfMat(3);
    CvInvoke.Split(imageMat, bgr);
    Mat[] bgra = { bgr[0], bgr[1], bgr[2], alpha };
    VectorOfMat vector = new VectorOfMat(bgra);
    CvInvoke.Merge(vector, finalMat);
    return finalMat.ToImage<Bgra, Byte>();
}
I'm now looking at adding SmoothGaussian to the mask to create a kind of blend where the two images are layered, rather than a sharp cut-out.
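That feathering idea can be sketched in one line with Emgu's `SmoothGaussian`. This is an assumption-laden sketch, not part of the original answer: `alphaMask` stands for the thresholded mask produced above, and the kernel size is a guess to be tuned:

```csharp
// Feather the hard binary mask so the composited cut-out fades out smoothly
// instead of ending in a sharp edge. The kernel size must be odd; larger
// values give a wider feather.
Image<Gray, Byte> featheredMask = alphaMask.SmoothGaussian(15);
```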

Emgu image conversion from Image<Gray,float> to Image<Gray,Byte> results in intensity loss?

We are performing image sharpening of a grayscale image of type Image<Gray, Byte> by subtracting the Laplacian of the image from the original image. The result, if saved as a JPEG, has well-defined edges and good contrast. However, if the resultant image is converted to Bitmap or Image<Gray, Byte> and then saved as JPEG, the intensity is reduced and the sharpening effect is lost. I suspected that converting to Bitmap might be causing this problem, so I saved some of the intermediate images and also converted the image to Image<Gray, Byte>. This did not help. I also tried to scale the image using a simple method. This too did not help.
The above behaviour also occurs when we compute the Laplacian and subtract the resultant image from the original image. Illustrations are below (the code has been simplified):
...
Image<Gray, Byte> sharpenedImage = Sharpen(originalProcessedImage);
ProcessedImage = sharpenedImage.ToBitmap(); // or ProcessedImage.Bitmap;
ProcessedImage.Save("ProcessedImage.jpg");  // results in intensity loss
...
public Image<Gray, Byte> Sharpen(Image<Gray, Byte> inputFrame)
{
    ConvolutionKernelF sharpenKernel = new ConvolutionKernelF(new float[,] { { -1, -1, -1 }, { -1, 8, -1 }, { -1, -1, -1 } });
    Image<Gray, float> newFloatImage = inputFrame.Convert<Gray, float>();
    Image<Gray, float> newConvolutedImage = newFloatImage.Convolution(sharpenKernel);
    Image<Gray, float> convolutedScaledShiftedImage = newFloatImage.AddWeighted(newConvolutedImage, 1.0, 1.0, 0);
    convolutedScaledShiftedImage.Save("ConvolutedScaledShiftedImage.jpg");  // added for testing
    // Now try to scale and save:
    Image<Gray, float> scaledImageFloat = convolutedScaledShiftedImage.Clone();
    Image<Gray, float> scaledImageFloat2 = ScaleImage(scaledImageFloat);
    scaledImageFloat.Save("ScaledImage.jpg");                               // added for testing
    scaledImageFloat2.Convert<Gray, Byte>().Save("ScaledImage-8Bits.jpg");  // added for testing
    // both of these return images of lower intensity:
    return scaledImageFloat2.Convert<Gray, Byte>();
    // return convolutedScaledShiftedImage.Convert<Gray, Byte>();
}
While ConvolutedScaledShiftedImage.jpg is brighter and has better contrast, ScaledImage.jpg and ScaledImage-8Bits.jpg have lost intensity compared to it. The same is true for ProcessedImage.jpg.
ScaleImage is shown below. It was not strictly necessary, but since Convert was losing intensity, I tried to perform the conversion myself and check:
Image<Gray, float> ScaleImage(Image<Gray, float> inputImage)
{
    double[] minValue, maxValue;
    Point[] minLocation, maxLocation;
    Image<Gray, float> scaledImage = inputImage.Clone();
    scaledImage.MinMax(out minValue, out maxValue, out minLocation, out maxLocation);
    double rangeValue = maxValue[0] - minValue[0];
    double scaleFactor = 1 / rangeValue;
    // shift so the minimum becomes 0, then scale the range up to 0..255
    Image<Gray, float> scaledImage1 = scaledImage.ConvertScale<float>(1.0, Math.Abs(minValue[0]));
    Image<Gray, float> scaledImage2 = scaledImage1.ConvertScale<float>(scaleFactor * 255, 0);
    return scaledImage2;
}
Would anybody be able to suggest what could be going wrong and why intensity is lost in the above operations? Thanks.
Edit: fixed the formatting issue; the conversion was from Image<Gray, float> to Image<Gray, Byte>.
Edit 12-Jan: I dug further into the OpenCV code. As I understand it, when you save an image of type Image<Gray, float> to JPEG, imwrite() first converts the image to an 8-bit image with image.convertTo(temp, CV_8U) and then writes that to the file. When the same operation is performed with Convert<Gray, Byte>(), the intensities are not the same, so it is not clear what the difference between the two is.
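One way to take that implicit conversion out of the equation (a sketch of my own, not from the question: `floatImage` is a placeholder for the float result of the sharpening step) is to normalize the float image into the 0..255 range explicitly before converting to 8-bit:

```csharp
// Stretch the float image's full value range to 0..255 before the 8-bit
// conversion, so the byte image keeps the float image's relative contrast
// regardless of what absolute values the convolution produced.
Image<Gray, float> normalized = floatImage.CopyBlank();
CvInvoke.Normalize(floatImage, normalized, 0, 255, NormType.MinMax);
Image<Gray, Byte> byteImage = normalized.Convert<Gray, Byte>();
```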

C# EmguCV/OpenCV "cvThreshold" abnormal behaviour - no expected threshold result

The task is to recognize red areas in an image, and it requires maximum accuracy, but the quality of the source image is quite bad. I'm trying to minimize noise in a mask of the detected red areas using cvThreshold. Unfortunately, it does not have the expected effect: gray artifacts remain.
//Converting from Bgr to Hsv
Image<Hsv, Byte> hsvimg = markedOrigRecImg.Convert<Hsv, Byte>();
Image<Gray, Byte>[] channels = hsvimg.Split();
Image<Gray, Byte> hue = channels[0];
Image<Gray, Byte> saturation = channels[1];
Image<Gray, Byte> value = channels[2];
Image<Gray, Byte> hueFilter = hue.InRange(new Gray(0), new Gray(30));
Image<Gray, Byte> satFilter = saturation.InRange(new Gray(100), new Gray(255));
Image<Gray, Byte> valFilter = value.InRange(new Gray(50), new Gray(255));
//Mask contains gray artifacts
Image<Gray,Byte> mask = (hueFilter.And(satFilter)).And(valFilter);
//Gray artifacts stay even if the threshold value (the third parameter) is 0...
CvInvoke.cvThreshold(mask, mask, 100, 255, THRESH.CV_THRESH_BINARY);
mask.Save("D:/img.jpg");
At the same time, it works fine here: the saved image is purely white:
#region test
Image<Gray,Byte> some = new Image<Gray, byte>(mask.Size);
some.SetValue(120);
CvInvoke.cvThreshold(some, some, 100, 255, THRESH.CV_THRESH_BINARY);
some.Save("D:/some.jpg");
#endregion
Mask before threshold example:
http://dl.dropbox.com/u/52502108/input.jpg
Mask after threshold example:
http://dl.dropbox.com/u/52502108/output.jpg
Thank You in advance.
Constantine B.
The reason gray artifacts appear on the saved images after applying any kind of threshold is the *.jpg default saving format of Image<TColor, TDepth>, or more precisely, the JPEG compression it applies. There was no problem with the thresholding itself, just a spoiled output image, although it was very confusing. The right way to save such images (without artifacts) is, for example: Image<TColor, TDepth>.Bitmap.Save(your_path, ImageFormat.Bmp)
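A minimal sketch of that lossless save, continuing from the `mask` variable in the question's code (System.Drawing.Imaging is assumed to be referenced):

```csharp
using System.Drawing.Imaging;

// JPEG is lossy and reintroduces gray values around sharp black/white edges;
// BMP and PNG are lossless, so the thresholded mask survives unchanged.
mask.Bitmap.Save(@"D:/img.bmp", ImageFormat.Bmp);
mask.Bitmap.Save(@"D:/img.png", ImageFormat.Png); // PNG: lossless and smaller
```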
