I am trying to find all differences between two images that have different offsets. I am able to align the images using the ORB detector, but when I then look for differences with CvInvoke.AbsDiff, every object is reported as different even though only a few have changed in the second image.
Image alignment logic
string file2 = "E://Image1.png";
string file1 = "E://Image2.png";
var image = CvInvoke.Imread(file1, Emgu.CV.CvEnum.ImreadModes.Color);
CvInvoke.CvtColor(image, image, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);
var template = CvInvoke.Imread(file2, Emgu.CV.CvEnum.ImreadModes.Color);
CvInvoke.CvtColor(template, template, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);
Emgu.CV.Features2D.ORBDetector dt = new Emgu.CV.Features2D.ORBDetector(50000);
Emgu.CV.Util.VectorOfKeyPoint kpsA = new Emgu.CV.Util.VectorOfKeyPoint();
Emgu.CV.Util.VectorOfKeyPoint kpsB = new Emgu.CV.Util.VectorOfKeyPoint();
var descsA = new Mat();
var descsB = new Mat();
dt.DetectAndCompute(image, null, kpsA, descsA, false);
dt.DetectAndCompute(template, null, kpsB, descsB, false);
Emgu.CV.Features2D.DescriptorMatcher matcher = new Emgu.CV.Features2D.BFMatcher(Emgu.CV.Features2D.DistanceType.Hamming);
matcher.Add(descsA);
VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch();
var mask = new Mat(matches.Size, 1, Emgu.CV.CvEnum.DepthType.Cv8U, 1);
mask.SetTo(new Bgr(Color.White).MCvScalar);
matcher.KnnMatch(descsB, matches, 2, mask);
Image<Bgr, byte> aligned = new Image<Bgr, byte>(image.Size);
Image<Bgr, byte> match = new Image<Bgr, byte>(template.Size);
Emgu.CV.Features2D.Features2DToolbox.VoteForUniqueness(matches, 0.80, mask);
PointF[] ptsA = new PointF[matches.Size];
PointF[] ptsB = new PointF[matches.Size];
MKeyPoint[] kpsAModel = kpsA.ToArray();
MKeyPoint[] kpsBModel = kpsB.ToArray();
for (int i = 0; i < matches.Size; i++)
{
    ptsA[i] = kpsAModel[matches[i][0].TrainIdx].Point;
    ptsB[i] = kpsBModel[matches[i][0].QueryIdx].Point;
}
Mat homography = CvInvoke.FindHomography(ptsA, ptsB, Emgu.CV.CvEnum.HomographyMethod.Ransac);
CvInvoke.WarpPerspective(image, aligned, homography, template.Size);
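For reference, the comparison step I run after the warp is roughly the following (a minimal sketch, assuming aligned and template are the same size and type by this point; the threshold value of 40 is an arbitrary choice of mine):
// Per-pixel difference between the warped image and the template,
// then a binary threshold so that only strong changes remain.
var diff = new Mat();
CvInvoke.AbsDiff(aligned, template, diff);
var changedMask = new Mat();
CvInvoke.Threshold(diff, changedMask, 40, 255, Emgu.CV.CvEnum.ThresholdType.Binary);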
I have attached the images used. Are there any suggestions to resolve this?
Thanks.
I am using EmguCV and C#. I want to remove small connected objects from my image using ConnectedComponentsWithStats.
I have the following binary image as input:
I am able to draw a rectangle for a specified area. Now I want to remove objects from the binary image.
Here is my code:
Image<Gray, byte> imgry = image.Convert<Gray, byte>();
var mask = imgry.InRange(new Gray(50), new Gray(255));
var label = new Mat();
var stats = new Mat();
var centroids = new Mat();
int labels = CvInvoke.ConnectedComponentsWithStats(mask, label, stats, centroids,
    LineType.EightConnected, DepthType.Cv32S);
var img = stats.ToImage<Gray, int>();
Image<Gray, byte> imgout1 = new Image<Gray, byte>(image.Width, image.Height);
for (var i = 1; i < labels; i++)
{
    var area = img[i, (int)ConnectecComponentsTypes.Area].Intensity;
    var width = img[i, (int)ConnectecComponentsTypes.Width].Intensity;
    var height = img[i, (int)ConnectecComponentsTypes.Height].Intensity;
    var top = img[i, (int)ConnectecComponentsTypes.Top].Intensity;
    var left = img[i, (int)ConnectecComponentsTypes.Left].Intensity;
    var roi = new Rectangle((int)left, (int)top, (int)width, (int)height);
    if (area > 1000)
    {
        CvInvoke.Rectangle(imgout1, roi, new MCvScalar(255), 1);
    }
}
Rectangle drawn for the specified area:
How can I remove objects of a specified size?
My output image should be like this:
I have achieved this one way by using contours. It works fine for small images, but when the image is large (10240x10240) with many particles, my application enters break mode.
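One alternative I am considering instead of contours (a minimal sketch of my own, not yet tested on 10240x10240 images) is to keep only the pixels whose component passes the area test, using the label matrix directly:
// Sketch: build the output mask from the label matrix instead of contours.
// label, stats, labels and image are the values from the code above;
// the 1000 area threshold is the same arbitrary value as before.
var labelImg = label.ToImage<Gray, int>();
var statsImg = stats.ToImage<Gray, int>();
bool[] keep = new bool[labels];
for (var i = 1; i < labels; i++)
    keep[i] = statsImg[i, (int)ConnectecComponentsTypes.Area].Intensity > 1000;
Image<Gray, byte> filtered = new Image<Gray, byte>(image.Width, image.Height);
for (int y = 0; y < labelImg.Height; y++)
    for (int x = 0; x < labelImg.Width; x++)
    {
        int id = labelImg.Data[y, x, 0];
        if (id > 0 && keep[id])
            filtered.Data[y, x, 0] = 255;   // keep pixels of large components only
    }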
My application closes when it reaches the creation of the EigenObjectRecognizer object, without giving any error or warning. Am I passing wrong parameters, or is there some other problem? Here is my code:
string[] allFaces = Directory.GetFiles(savepath);
if (allFaces != null)
{
    Image<Gray, Byte>[] trainingImages = new Image<Gray, Byte>[allFaces.Length];
    string[] labels = new String[allFaces.Length];
    for (int i = 0; i < allFaces.Length; i++)
    {
        trainingImages[i] = new Image<Gray, byte>(new Bitmap(allFaces[i]));
        labels[i] = allFaces[i].Substring(allFaces[i].LastIndexOf("\\") + 1);
    }
    MCvTermCriteria termCrit = new MCvTermCriteria(allFaces.Length, 0.001);
    EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
        trainingImages,
        labels,
        1000,
        ref termCrit);
    Image<Gray, Byte> testImage = new Image<Gray, Byte>(@"C:\..test\1");
    string label = recognizer.Recognize(testImage).Label;
    MessageBox.Show(label);
}
I solved the problem by running the compiled exe directly, which surfaced an error from OpenCV (rather than EmguCV) saying the inputs were not the same size. When I looked at my training images, they were indeed of different sizes. Hope it helps others.
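A minimal sketch of that fix (assuming the 2.x-style Resize overload; the 100x100 training size is an arbitrary choice):
// Force every training image to one common size before building the recognizer.
for (int i = 0; i < allFaces.Length; i++)
{
    trainingImages[i] = new Image<Gray, byte>(new Bitmap(allFaces[i]))
        .Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
    labels[i] = allFaces[i].Substring(allFaces[i].LastIndexOf("\\") + 1);
}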
I have an image of a Windows control, let's say a text box, and I want to get the background color of the text written within the text box by finding the most frequent color in that picture via pixel color comparison.
I searched on Google and found that everyone talks about histograms, and some code is given to compute the histogram of an image, but no one describes what to do after computing the histogram.
The code I found on some sites is like this:
// Create a grayscale image
Image<Gray, Byte> img = new Image<Gray, byte>(bmp);
// Fill image with random values
img.SetRandUniform(new MCvScalar(), new MCvScalar(255));
// Create and initialize histogram
DenseHistogram hist = new DenseHistogram(256, new RangeF(0.0f, 255.0f));
// Histogram Computing
hist.Calculate<Byte>(new Image<Gray, byte>[] { img }, true, null);
Currently I use code that takes a single line segment from the image and finds the most frequent color along it, but that is not the right way to do it.
The currently used code is as follows:
Image<Bgr, byte> image = new Image<Bgr, byte>(temp);
int height = temp.Height / 2;
Dictionary<Bgr, int> colors = new Dictionary<Bgr, int>();
for (int i = 0; i < image.Width; i++)
{
    Bgr pixel = image[height, i];
    if (colors.ContainsKey(pixel))
        colors[pixel] += 1;
    else
        colors.Add(pixel, 1);
}
Bgr result = colors.FirstOrDefault(x => x.Value == colors.Values.Max()).Key;
Please help me if anyone knows how to do this. Take this image as input ==>
Emgu.CV's DenseHistogram exposes the method MinMax() which finds the maximum and minimum bin of the histogram.
So after computing your histogram like in your first code snippet:
// Create a grayscale image
Image<Gray, Byte> img = new Image<Gray, byte>(bmp);
// Fill image with random values
img.SetRandUniform(new MCvScalar(), new MCvScalar(255));
// Create and initialize histogram
DenseHistogram hist = new DenseHistogram(256, new RangeF(0.0f, 255.0f));
// Histogram Computing
hist.Calculate<Byte>(new Image<Gray, byte>[] { img }, true, null);
...find the peak of the histogram with this method:
float minValue, maxValue;
Point[] minLocation;
Point[] maxLocation;
hist.MinMax(out minValue, out maxValue, out minLocation, out maxLocation);
// This is the value you are looking for (the bin representing the highest peak in your
// histogram is the also the main color of your image).
var mainColor = maxLocation[0].Y;
I found a code snippet on Stack Overflow which does what I need.
The code goes like this:
int BlueHist;
int GreenHist;
int RedHist;
Image<Bgr, Byte> img = new Image<Bgr, byte>(bmp);
DenseHistogram Histo = new DenseHistogram(255, new RangeF(0, 255));
// Split the image into its blue, green and red channels.
Image<Gray, Byte> img2Blue = img[0];
Image<Gray, Byte> img2Green = img[1];
Image<Gray, Byte> img2Red = img[2];
// Histogram peak of the blue channel.
Histo.Calculate(new Image<Gray, Byte>[] { img2Blue }, true, null);
double[] minV, maxV;
Point[] minL, maxL;
Histo.MinMax(out minV, out maxV, out minL, out maxL);
BlueHist = maxL[0].Y;
// Histogram peak of the green channel.
Histo.Clear();
Histo.Calculate(new Image<Gray, Byte>[] { img2Green }, true, null);
Histo.MinMax(out minV, out maxV, out minL, out maxL);
GreenHist = maxL[0].Y;
// Histogram peak of the red channel.
Histo.Clear();
Histo.Calculate(new Image<Gray, Byte>[] { img2Red }, true, null);
Histo.MinMax(out minV, out maxV, out minL, out maxL);
RedHist = maxL[0].Y;
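The three per-channel peaks can then be put back together into a single color. Note this is only an approximation of the dominant color, because each channel's peak is found independently, but it works well when one background color dominates the image:
// Assemble the per-channel histogram peaks into one BGR / System.Drawing color.
Bgr dominant = new Bgr(BlueHist, GreenHist, RedHist);
Color backColor = Color.FromArgb(RedHist, GreenHist, BlueHist);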
I am doing a project on panoramic stitching of images using Emgu CV (OpenCV for C#). So far I have some code that stitches images, but the output looks odd. This is what I am getting:
My panorama:
This is what the Emgu CV Stitcher.Stitch method gives:
Stitched by the built-in stitcher
Clearly I am missing something. Moreover, if I add more images, the output gets even more stretched, like this one:
I am not able to figure out what I am missing. Here is my code so far:
http://pastebin.com/Ke2Zz4m9
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Features2D;
using Emgu.CV.Structure;
using Emgu.CV.UI;
using Emgu.CV.Util;
using Emgu.CV.GPU;
namespace Project
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            Image<Bgr, float> one = new Image<Bgr, float>("D:\\Venice_panorama_part_01.jpg");
            Image<Bgr, float> two = new Image<Bgr, float>("D:\\Venice_panorama_part_02.jpg");
            Image<Bgr, float> third = new Image<Bgr, float>("D:\\Venice_panorama_part_03.jpg");
            Image<Bgr, float> fourth = new Image<Bgr, float>("D:\\Venice_panorama_part_04.jpg");
            Image<Bgr, float> fifth = new Image<Bgr, float>("D:\\Venice_panorama_part_05.jpg");
            Image<Bgr, float> sixth = new Image<Bgr, float>("D:\\Venice_panorama_part_06.jpg");
            Image<Bgr, float> seventh = new Image<Bgr, float>("D:\\Venice_panorama_part_07.jpg");
            Image<Bgr, float> eighth = new Image<Bgr, float>("D:\\Venice_panorama_part_08.jpg");
            Image<Bgr, Byte> result = FindMatch(two, third);
            result = convert(result);
            Image<Bgr, float> twoPlusThree = result.Convert<Bgr, float>();
            Image<Bgr, Byte> result2 = FindMatch(fourth, fifth);
            result2 = convert(result2);
            Image<Bgr, float> fourPlusFive = result2.Convert<Bgr, float>();
            Image<Bgr, Byte> result3 = FindMatch(sixth, seventh);
            result3 = convert(result3);
            Image<Bgr, float> sixPlusSeven = result3.Convert<Bgr, float>();
            Image<Bgr, Byte> result4 = FindMatch(one, twoPlusThree);
            result4 = convert(result4);
            Image<Bgr, float> oneTwoThree = result4.Convert<Bgr, float>();
            Image<Bgr, Byte> result5 = FindMatch(oneTwoThree, fourPlusFive);
            result5 = convert(result5);
            Image<Bgr, float> oneTwoThreeFourFive = result5.Convert<Bgr, float>();
            Image<Bgr, Byte> result6 = FindMatch(sixPlusSeven, eighth);
            result6 = convert(result6);
            Image<Bgr, float> sixSevenEigth = result6.Convert<Bgr, float>();
            Image<Bgr, Byte> result7 = FindMatch(oneTwoThreeFourFive, sixSevenEigth);
            result7 = convert(result7);
            result.Save("D:\\result1.jpg");
            result2.Save("D:\\result2.jpg");
            result3.Save("D:\\result3.jpg");
            result4.Save("D:\\result4.jpg");
            result5.Save("D:\\result5.jpg");
            result6.Save("D:\\result6.jpg");
            result7.Save("D:\\result7.jpg");
            this.Close();
        }
        public static Image<Bgr, Byte> FindMatch(Image<Bgr, float> fImage, Image<Bgr, float> lImage)
        {
            HomographyMatrix homography = null;
            SURFDetector surfCPU = new SURFDetector(500, false);
            int k = 2;
            double uniquenessThreshold = 0.8;
            Matrix<int> indices;
            Matrix<byte> mask;
            VectorOfKeyPoint modelKeyPoints;
            VectorOfKeyPoint observedKeyPoints;
            Image<Gray, Byte> fImageG = fImage.Convert<Gray, Byte>();
            Image<Gray, Byte> lImageG = lImage.Convert<Gray, Byte>();

            if (GpuInvoke.HasCuda)
            {
                GpuSURFDetector surfGPU = new GpuSURFDetector(surfCPU.SURFParams, 0.01f);
                // extract features from the object image
                using (GpuImage<Gray, Byte> gpuModelImage = new GpuImage<Gray, byte>(fImageG))
                using (GpuMat<float> gpuModelKeyPoints = surfGPU.DetectKeyPointsRaw(gpuModelImage, null))
                using (GpuMat<float> gpuModelDescriptors = surfGPU.ComputeDescriptorsRaw(gpuModelImage, null, gpuModelKeyPoints))
                using (GpuBruteForceMatcher<float> matcher = new GpuBruteForceMatcher<float>(DistanceType.L2))
                {
                    modelKeyPoints = new VectorOfKeyPoint();
                    surfGPU.DownloadKeypoints(gpuModelKeyPoints, modelKeyPoints);

                    // extract features from the observed image
                    using (GpuImage<Gray, Byte> gpuObservedImage = new GpuImage<Gray, byte>(lImageG))
                    using (GpuMat<float> gpuObservedKeyPoints = surfGPU.DetectKeyPointsRaw(gpuObservedImage, null))
                    using (GpuMat<float> gpuObservedDescriptors = surfGPU.ComputeDescriptorsRaw(gpuObservedImage, null, gpuObservedKeyPoints))
                    using (GpuMat<int> gpuMatchIndices = new GpuMat<int>(gpuObservedDescriptors.Size.Height, k, 1, true))
                    using (GpuMat<float> gpuMatchDist = new GpuMat<float>(gpuObservedDescriptors.Size.Height, k, 1, true))
                    using (GpuMat<Byte> gpuMask = new GpuMat<byte>(gpuMatchIndices.Size.Height, 1, 1))
                    using (Stream stream = new Stream())
                    {
                        matcher.KnnMatchSingle(gpuObservedDescriptors, gpuModelDescriptors, gpuMatchIndices, gpuMatchDist, k, null, stream);
                        indices = new Matrix<int>(gpuMatchIndices.Size);
                        mask = new Matrix<byte>(gpuMask.Size);

                        // GPU implementation of VoteForUniqueness
                        using (GpuMat<float> col0 = gpuMatchDist.Col(0))
                        using (GpuMat<float> col1 = gpuMatchDist.Col(1))
                        {
                            GpuInvoke.Multiply(col1, new MCvScalar(uniquenessThreshold), col1, stream);
                            GpuInvoke.Compare(col0, col1, gpuMask, CMP_TYPE.CV_CMP_LE, stream);
                        }

                        observedKeyPoints = new VectorOfKeyPoint();
                        surfGPU.DownloadKeypoints(gpuObservedKeyPoints, observedKeyPoints);

                        // wait for the stream to complete its tasks;
                        // other CPU-intensive work could be done here while waiting
                        stream.WaitForCompletion();
                        gpuMask.Download(mask);
                        gpuMatchIndices.Download(indices);

                        if (GpuInvoke.CountNonZero(gpuMask) >= 4)
                        {
                            int nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
                            if (nonZeroCount >= 4)
                                homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 2);
                        }
                    }
                }
            }
            else
            {
                // extract features from the object image
                modelKeyPoints = new VectorOfKeyPoint();
                Matrix<float> modelDescriptors = surfCPU.DetectAndCompute(fImageG, null, modelKeyPoints);

                // extract features from the observed image
                observedKeyPoints = new VectorOfKeyPoint();
                Matrix<float> observedDescriptors = surfCPU.DetectAndCompute(lImageG, null, observedKeyPoints);
                BruteForceMatcher<float> matcher = new BruteForceMatcher<float>(DistanceType.L2);
                matcher.Add(modelDescriptors);

                indices = new Matrix<int>(observedDescriptors.Rows, k);
                using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
                {
                    matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
                    mask = new Matrix<byte>(dist.Rows, 1);
                    mask.SetValue(255);
                    Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
                }

                int nonZeroCount = CvInvoke.cvCountNonZero(mask);
                if (nonZeroCount >= 4)
                {
                    nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
                    if (nonZeroCount >= 4)
                        homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 2);
                }
            }

            Image<Bgr, Byte> mImage = fImage.Convert<Bgr, Byte>();
            Image<Bgr, Byte> oImage = lImage.Convert<Bgr, Byte>();
            Image<Bgr, Byte> result = new Image<Bgr, byte>(mImage.Width + oImage.Width, mImage.Height);

            if (homography != null)
            {
                // draw a rectangle along the projected model
                Rectangle rect = fImage.ROI;
                PointF[] pts = new PointF[] {
                    new PointF(rect.Left, rect.Bottom),
                    new PointF(rect.Right, rect.Bottom),
                    new PointF(rect.Right, rect.Top),
                    new PointF(rect.Left, rect.Top)};
                homography.ProjectPoints(pts);

                // copy the left image into the mosaic with an identity (no shift) transform on the origin
                HomographyMatrix origin = new HomographyMatrix();
                origin.SetIdentity();
                origin.Data[0, 2] = 0;
                origin.Data[1, 2] = 0;
                Image<Bgr, Byte> mosaic = new Image<Bgr, byte>(mImage.Width + oImage.Width + 2000, mImage.Height * 2);
                Image<Bgr, byte> warp_image = mosaic.Clone();
                mosaic = mImage.WarpPerspective(origin, mosaic.Width, mosaic.Height, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR, Emgu.CV.CvEnum.WARP.CV_WARP_DEFAULT, new Bgr(0, 0, 0));
                warp_image = oImage.WarpPerspective(homography, warp_image.Width, warp_image.Height, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR, Emgu.CV.CvEnum.WARP.CV_WARP_INVERSE_MAP, new Bgr(200, 0, 0));
                Image<Gray, byte> warp_image_mask = oImage.Convert<Gray, byte>();
                warp_image_mask.SetValue(new Gray(255));
                Image<Gray, byte> warp_mosaic_mask = mosaic.Convert<Gray, byte>();
                warp_mosaic_mask.SetZero();
                warp_mosaic_mask = warp_image_mask.WarpPerspective(homography, warp_mosaic_mask.Width, warp_mosaic_mask.Height, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR, Emgu.CV.CvEnum.WARP.CV_WARP_INVERSE_MAP, new Gray(0));
                warp_image.Copy(mosaic, warp_mosaic_mask);
                return mosaic;
            }
            return null;
        }
        private Image<Bgr, Byte> convert(Image<Bgr, Byte> img)
        {
            Image<Gray, byte> imgGray = img.Convert<Gray, byte>();
            Image<Gray, byte> mask = imgGray.CopyBlank();
            Contour<Point> largestContour = null;
            double largestarea = 0;
            for (var contours = imgGray.FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
                     RETR_TYPE.CV_RETR_EXTERNAL); contours != null; contours = contours.HNext)
            {
                if (contours.Area > largestarea)
                {
                    largestarea = contours.Area;
                    largestContour = contours;
                }
            }
            // crop the stitched result to the bounding rectangle of the largest contour
            CvInvoke.cvSetImageROI(img, largestContour.BoundingRectangle);
            return img;
        }
    }
}
Actually there is nothing wrong with your code, and this image is totally correct. Notice that when you stitch all the images together, you take the first (left) image as the reference plane and set it as the front direction; all the subsequent images, which were originally oriented toward the right, are projected onto a plane facing the front. Imagine you are sitting inside a room: the wall in front of you appears rectangular, while the one on your right side may look trapezoidal. This is because of so-called "perspective distortion" (the homography), and the larger the horizontal angle of view, the more noticeable this phenomenon becomes.
So if one intends to stitch a series of images that covers a wide angle of view, one typically projects onto a cylindrical or spherical surface instead of a planar one. You may find this option by searching the reference manual.
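To illustrate the idea, here is a rough sketch of warping a single image onto a cylindrical surface before stitching, written with the Image<,> API used above. The focal length f is an assumption on my part; in practice it would come from calibration or EXIF data, and bilinear sampling would give a smoother result than the nearest-neighbour lookup used here.
// Sketch: inverse-map every destination pixel back onto the original planar image.
// theta is the horizontal angle on the cylinder, h the scaled vertical offset.
static Image<Bgr, byte> ToCylindrical(Image<Bgr, byte> src, double f)
{
    Image<Bgr, byte> dst = src.CopyBlank();
    double xc = src.Width / 2.0, yc = src.Height / 2.0;
    for (int y = 0; y < dst.Height; y++)
    {
        for (int x = 0; x < dst.Width; x++)
        {
            double theta = (x - xc) / f;
            double h = (y - yc) / f;
            double xs = xc + f * Math.Tan(theta);        // back to planar x
            double ys = yc + h * f / Math.Cos(theta);    // back to planar y
            if (xs >= 0 && xs < src.Width && ys >= 0 && ys < src.Height)
                dst[y, x] = src[(int)ys, (int)xs];       // nearest-neighbour sample
        }
    }
    return dst;
}
Stitching the cylindrically warped images (or simply letting the built-in Stitcher choose a non-planar warper) avoids the extreme stretching at the edges of a wide panorama.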