Emgu CV - Issue getting differences for misaligned images - C#

I am trying to find all differences between two images that have a different offset. I am able to align the images using an ORB detector, but when I try to find the differences with CvInvoke.AbsDiff, all objects are reported as different even though only a few have changed in the second image.
Image alignment logic:
string file2 = "E://Image1.png";
string file1 = "E://Image2.png";
var image = CvInvoke.Imread(file1, Emgu.CV.CvEnum.ImreadModes.Color);
CvInvoke.CvtColor(image, image, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);
var template = CvInvoke.Imread(file2, Emgu.CV.CvEnum.ImreadModes.Color);
CvInvoke.CvtColor(template, template, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);

// Detect ORB keypoints and descriptors in both images
Emgu.CV.Features2D.ORBDetector dt = new Emgu.CV.Features2D.ORBDetector(50000);
Emgu.CV.Util.VectorOfKeyPoint kpsA = new Emgu.CV.Util.VectorOfKeyPoint();
Emgu.CV.Util.VectorOfKeyPoint kpsB = new Emgu.CV.Util.VectorOfKeyPoint();
var descsA = new Mat();
var descsB = new Mat();
dt.DetectAndCompute(image, null, kpsA, descsA, false);
dt.DetectAndCompute(template, null, kpsB, descsB, false);

Emgu.CV.Features2D.DescriptorMatcher matcher = new Emgu.CV.Features2D.BFMatcher(Emgu.CV.Features2D.DistanceType.Hamming);
matcher.Add(descsA);
VectorOfVectorOfDMatch matches = new VectorOfVectorOfDMatch();
matcher.KnnMatch(descsB, matches, 2, null);

// The mask can only be sized once KnnMatch has filled `matches`
var mask = new Mat(matches.Size, 1, Emgu.CV.CvEnum.DepthType.Cv8U, 1);
mask.SetTo(new MCvScalar(255));
Emgu.CV.Features2D.Features2DToolbox.VoteForUniqueness(matches, 0.80, mask);

PointF[] ptsA = new PointF[matches.Size];
PointF[] ptsB = new PointF[matches.Size];
MKeyPoint[] kpsAModel = kpsA.ToArray();
MKeyPoint[] kpsBModel = kpsB.ToArray();
for (int i = 0; i < matches.Size; i++)
{
    // descsA was added to the matcher, so TrainIdx refers to kpsA and QueryIdx to kpsB
    ptsA[i] = kpsAModel[matches[i][0].TrainIdx].Point;
    ptsB[i] = kpsBModel[matches[i][0].QueryIdx].Point;
}

Mat homography = CvInvoke.FindHomography(ptsA, ptsB, Emgu.CV.CvEnum.HomographyMethod.Ransac);
// Both inputs are grayscale at this point, so warp into a grayscale image
Image<Gray, byte> aligned = new Image<Gray, byte>(template.Size);
CvInvoke.WarpPerspective(image, aligned, homography, template.Size);
I have attached the images used. Does anyone have a suggestion for resolving this?
Thanks.
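
For what it's worth, a plain CvInvoke.AbsDiff after warping is very sensitive to the sub-pixel resampling that WarpPerspective introduces, so every edge tends to show up as a difference. One common mitigation, sketched below under the assumption that `aligned` and `template` are the grayscale images from the code above (the blur kernel and the threshold of 25 are guesses to tune), is to blur both images slightly, threshold the absolute difference, and dilate the result so only substantial changes survive:
// Sketch: misalignment-tolerant diff of two aligned grayscale images.
Mat a = new Mat(), b = new Mat(), diff = new Mat();
CvInvoke.GaussianBlur(aligned, a, new System.Drawing.Size(5, 5), 0);  // damp resampling noise
CvInvoke.GaussianBlur(template, b, new System.Drawing.Size(5, 5), 0);
CvInvoke.AbsDiff(a, b, diff);
// keep only differences stronger than ~25 gray levels
CvInvoke.Threshold(diff, diff, 25, 255, Emgu.CV.CvEnum.ThresholdType.Binary);
// merge nearby difference pixels into solid blobs
CvInvoke.Dilate(diff, diff, null, new System.Drawing.Point(-1, -1), 2,
    Emgu.CV.CvEnum.BorderType.Default, new MCvScalar());
CvInvoke.Imshow("differences", diff);
CvInvoke.WaitKey();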

Related

How to get rid of unnecessary lines with Emgu CV

I'm trying to detect the contour of an ellipse-like water droplet with Emgu CV. I wrote this code for contour detection:
public List<int> GetDiameters()
{
    string inputFile = @"path.jpg";
    Image<Bgr, byte> imageInput = new Image<Bgr, byte>(inputFile);
    Image<Gray, byte> grayImage = imageInput.Convert<Gray, byte>();

    Image<Gray, byte> bluredImage = new Image<Gray, byte>(grayImage.Size);
    CvInvoke.MedianBlur(grayImage, bluredImage, 9);

    Image<Gray, byte> edgedImage = new Image<Gray, byte>(grayImage.Size);
    CvInvoke.Canny(bluredImage, edgedImage, 50, 5);

    Image<Gray, byte> closedImage = new Image<Gray, byte>(grayImage.Size);
    Mat kernel = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Ellipse, new System.Drawing.Size { Height = 100, Width = 250 }, new System.Drawing.Point(-1, -1));
    // iterations must be at least 1, otherwise the closing is never applied
    CvInvoke.MorphologyEx(edgedImage, closedImage, Emgu.CV.CvEnum.MorphOp.Close, kernel, new System.Drawing.Point(-1, -1), 1, Emgu.CV.CvEnum.BorderType.Replicate, new MCvScalar());

    Image<Gray, byte> contoursImage = closedImage;
    Image<Bgr, byte> imageOut = imageInput;
    VectorOfVectorOfPoint rescontours1 = new VectorOfVectorOfPoint();
    using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
    {
        CvInvoke.FindContours(contoursImage, contours, null, Emgu.CV.CvEnum.RetrType.List,
            Emgu.CV.CvEnum.ChainApproxMethod.LinkRuns);
        MCvScalar color = new MCvScalar(0, 0, 255);
        int count = contours.Size;
        for (int i = 0; i < count; i++)
        {
            using (VectorOfPoint contour = contours[i])
            using (VectorOfPoint approxContour = new VectorOfPoint())
            {
                CvInvoke.ApproxPolyDP(contour, approxContour,
                    0.01 * CvInvoke.ArcLength(contour, true), true);
                var area = CvInvoke.ContourArea(contour);
                if (area > 0 && approxContour.Size > 10)
                {
                    rescontours1.Push(approxContour);
                }
                CvInvoke.DrawContours(imageOut, rescontours1, -1, color, 2);
            }
        }
    }
    return new List<int>(); // diameters not computed yet
}
Result so far:
I think there is a problem with the approximation. How do I get rid of the internal lines and close the external contour?
I might need some more information to pinpoint your issue exactly, but it may be something to do with your median blur. I would check whether you are blurring enough for the Canny edge detection to work cleanly. Another method you could use is dilation: try dilating your Canny edge output and see if you get better results.
EDIT
Here is the code:
public List<int> GetDiameters()
{
    //List to hold output diameters
    List<int> diameters = new List<int>();

    //File path to where the image is located
    string inputFile = @"C:\Users\jones\Desktop\Image Folder\water.JPG";

    //Read in the image and store it as a Mat object
    Mat img = CvInvoke.Imread(inputFile, Emgu.CV.CvEnum.ImreadModes.AnyColor);

    //Mat object that will hold the output of the Gaussian blur
    Mat gaussianBlur = new Mat();

    //Blur the image
    CvInvoke.GaussianBlur(img, gaussianBlur, new System.Drawing.Size(21, 21), 20, 20, Emgu.CV.CvEnum.BorderType.Default);

    //Mat object that will hold the output of the Canny
    Mat canny = new Mat();

    //Canny the image
    CvInvoke.Canny(gaussianBlur, canny, 40, 40);

    //Mat object that will hold the output of the dilate
    Mat dilate = new Mat();

    //Dilate the Canny image
    CvInvoke.Dilate(canny, dilate, null, new System.Drawing.Point(-1, -1), 6, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar(0, 0, 0));

    //Vector that will hold all found contours
    VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();

    //Find the contours and draw them on the image
    CvInvoke.FindContours(dilate, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
    CvInvoke.DrawContours(img, contours, -1, new MCvScalar(255, 0, 0), 5, Emgu.CV.CvEnum.LineType.FourConnected);

    //Variables to hold relevant info on what is the biggest contour
    int biggest = 0;
    int index = 0;

    //Find the biggest contour (compare each contour's size, not the count of contours)
    for (int i = 0; i < contours.Size; i++)
    {
        if (contours[i].Size > biggest)
        {
            biggest = contours[i].Size;
            index = i;
        }
    }

    //Once all contours have been looped over, add the biggest contour's index to the list
    diameters.Add(index);

    //Return the list
    return diameters;
}
The first thing you do is blur the image.
Then you canny the image.
Then you dilate the image, so as to make the final output contours more uniform.
Then you just find contours.
I know the final contours are a little bigger than the water droplet, but this is the best that I could come up with. You can probably fiddle around with some of the settings and the code above to make the result a little cleaner.
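Note that what gets added to the list is really the index of the biggest contour rather than a diameter. If an actual diameter is wanted, one option (an assumption about the intent, not part of the original answer) is to derive an equivalent-circle diameter from that contour's area:
// Sketch: equivalent-circle diameter of the biggest contour,
// assuming `contours`, `index`, and `diameters` from the code above.
double area = CvInvoke.ContourArea(contours[index]);
int diameter = (int)Math.Round(2.0 * Math.Sqrt(area / Math.PI));
diameters.Add(diameter);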

CvInvoke.MinAreaRect(contour) returns the wrong angle

I have a contour of a number plate and I want to check whether it is tilted or not. I used CvInvoke.MinAreaRect(contour), but it always returns an angle of -90 even when the plate is obviously tilted; you can see the contour I drew in the picture below.
Does anyone know what is happening, and how to solve this problem?
Here is the code:
Image<Gray, byte> gray = new Image<Gray, byte>("2.PNG");
Image<Gray, byte> adaptive_threshold_img = gray.ThresholdAdaptive(new Gray(255), AdaptiveThresholdType.GaussianC, ThresholdType.BinaryInv, 11, new Gray(2));
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat hier = new Mat();
CvInvoke.FindContours(adaptive_threshold_img, contours, hier, RetrType.Tree, ChainApproxMethod.ChainApproxSimple);
double max_area = 0;
VectorOfPoint max_contour = new VectorOfPoint();
for (int i = 0; i < contours.Size; i++)
{
    double temp = CvInvoke.ContourArea(contours[i]);
    if (temp > max_area)
    {
        max_area = temp;
        max_contour = contours[i];
    }
}
VectorOfVectorOfPoint contour_to_draw = new VectorOfVectorOfPoint(max_contour);
CvInvoke.DrawContours(gray, contour_to_draw, 0, new MCvScalar(255), 2);
CvInvoke.Imshow("plate", gray);
RotatedRect plate_feature = CvInvoke.MinAreaRect(max_contour);
CvInvoke.WaitKey();
CvInvoke.DestroyAllWindows();
Try CvInvoke.Threshold() instead of gray.ThresholdAdaptive(). Set a proper threshold and you'll get a better contour than before.
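A minimal sketch of that suggestion, reusing the names from the question; the fixed threshold of 128 is an assumption to tune for the actual plate image:
// Sketch: global threshold instead of adaptive thresholding.
Image<Gray, byte> gray = new Image<Gray, byte>("2.PNG");
Image<Gray, byte> binary = new Image<Gray, byte>(gray.Size);
// BinaryInv keeps the dark plate as foreground
CvInvoke.Threshold(gray, binary, 128, 255, ThresholdType.BinaryInv);
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
CvInvoke.FindContours(binary, contours, null, RetrType.External, ChainApproxMethod.ChainApproxSimple);
// with a cleaner outline, MinAreaRect on the biggest contour should
// report the plate's actual tilt instead of a degenerate -90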

How to remove connected objects in binary image using ConnectedComponents

I am using Emgu CV and C#. I want to remove small connected objects from my image using ConnectedComponentsWithStats.
I have the following binary image as input:
I am able to draw a rectangle for a specified area. Now I want to remove objects from the binary image.
Here is my code:
Image<Gray, byte> imgry = image.Convert<Gray, byte>();
var mask = imgry.InRange(new Gray(50), new Gray(255));
var label = new Mat();
var stats = new Mat();
var centroids = new Mat();
int labels = CvInvoke.ConnectedComponentsWithStats(mask, label, stats,
    centroids,
    LineType.EightConnected,
    DepthType.Cv32S);
var img = stats.ToImage<Gray, int>();
Image<Gray, byte> imgout1 = new Image<Gray, byte>(image.Width, image.Height);
for (var i = 1; i < labels; i++)
{
    var area = img[i, (int)ConnectedComponentsTypes.Area].Intensity;
    var width = img[i, (int)ConnectedComponentsTypes.Width].Intensity;
    var height = img[i, (int)ConnectedComponentsTypes.Height].Intensity;
    var top = img[i, (int)ConnectedComponentsTypes.Top].Intensity;
    var left = img[i, (int)ConnectedComponentsTypes.Left].Intensity;
    var roi = new Rectangle((int)left, (int)top, (int)width, (int)height);
    if (area > 1000)
    {
        CvInvoke.Rectangle(imgout1, roi, new MCvScalar(255), 1);
    }
}
Drawn rectangle for the specified area:
How can I remove objects of a specified size?
My output image should look like this:
I have achieved this one way using contours; it works fine for a small image, but when I have a large image (10240×10240) with a larger number of particles, my application enters break mode.
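One way to do the removal itself, sketched under the assumption that `label`, `img`, `labels`, and `image` are the variables from the code above (the 1000-pixel cutoff is the question's own value): instead of drawing rectangles, rebuild the output mask by copying the pixels of every component that is large enough, which drops the small objects without any contour pass:
// Sketch: keep only connected components with area > 1000.
Image<Gray, byte> imgout = new Image<Gray, byte>(image.Width, image.Height);
for (var i = 1; i < labels; i++)
{
    var area = img[i, (int)ConnectedComponentsTypes.Area].Intensity;
    if (area > 1000)
    {
        using (ScalarArray sa = new ScalarArray((double)i))
        using (Mat componentMask = new Mat())
        {
            // componentMask is 255 exactly where the label image equals i
            CvInvoke.Compare(label, sa, componentMask, CmpType.Equal);
            CvInvoke.BitwiseOr(imgout, componentMask, imgout);
        }
    }
}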

Face Recognizer using EigenObjectRecognizer

// one training image, matching the single label below
Image<Gray, Byte>[] trainingImages = new Image<Gray, Byte>[1];
trainingImages[0] = new Image<Gray, byte>("MyPic.jpg");
String[] labels = new String[] { "mine" };
MCvTermCriteria termCrit = new MCvTermCriteria(1, 0.001);
EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
    trainingImages,
    labels,
    1000,
    ref termCrit);
Image<Gray, Byte> testImage = new Image<Gray, Byte>("sample_photo.jpg");
var result = recognizer.Recognize(testImage);
result.Label always returns the string "mine" (the label of the training image) for every face it detects.
result.Label should only be returned when the detected face matches the training image; instead, it returns the same label for every face.
What is the problem with my code?
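One thing worth checking, sketched here under the assumption that this is the legacy Emgu 2.x EigenObjectRecognizer: with a single training image and a generous eigenDistanceThreshold of 1000, every detected face falls within the threshold of the only class, so "mine" is always returned. Training on more than one subject and inspecting the match distance makes the label meaningful ("OtherPic.jpg" and both thresholds below are hypothetical values):
// Sketch only: a second subject for contrast, plus a distance check.
Image<Gray, Byte>[] trainingImages = new Image<Gray, Byte>[]
{
    new Image<Gray, Byte>("MyPic.jpg"),    // subject "mine"
    new Image<Gray, Byte>("OtherPic.jpg")  // hypothetical second person
};
String[] labels = new String[] { "mine", "other" };
MCvTermCriteria termCrit = new MCvTermCriteria(16, 0.001);
EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
    trainingImages, labels, 2000, ref termCrit);
var result = recognizer.Recognize(new Image<Gray, Byte>("sample_photo.jpg"));
// an empty Label means nothing fell within the distance threshold
if (!String.IsNullOrEmpty(result.Label))
    Console.WriteLine("{0} (distance {1})", result.Label, result.Distance);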

EmguCV SURF with cam?

I am new to Emgu CV. I would like to use SURF to detect more than one pattern from a cam, like in this video. For now, I am trying to get it working with just one pattern as a starting point.
I examined Emgu CV's SURF example. When I try to move that code into the cam capture example, an error occurs at run time. I searched around but did not find any code example.
So, could you suggest a code snippet or a tutorial that explains this well?
Thanks very much in advance.
The code I am working on is below:
// ...
FrameRaw = capture.QueryFrame();
CamImageBox.Image = FrameRaw;
Run(FrameRaw);
// ...
private void Run(Image<Bgr, byte> TempImage)
{
    Image<Gray, Byte> modelImage = new Image<Gray, byte>("sample.jpg");
    Image<Gray, Byte> observedImage = TempImage.Convert<Gray, Byte>();
    // Image<Gray, Byte> observedImage = new Image<Gray, byte>("box_in_scene.png");

    Stopwatch watch;
    HomographyMatrix homography = null;
    SURFDetector surfCPU = new SURFDetector(500, false);
    VectorOfKeyPoint modelKeyPoints;
    VectorOfKeyPoint observedKeyPoints;
    Matrix<int> indices;
    Matrix<float> dist;
    Matrix<byte> mask;

    if (GpuInvoke.HasCuda)
    {
        GpuSURFDetector surfGPU = new GpuSURFDetector(surfCPU.SURFParams, 0.01f);
        using (GpuImage<Gray, Byte> gpuModelImage = new GpuImage<Gray, byte>(modelImage))
        //extract features from the object image
        using (GpuMat<float> gpuModelKeyPoints = surfGPU.DetectKeyPointsRaw(gpuModelImage, null))
        using (GpuMat<float> gpuModelDescriptors = surfGPU.ComputeDescriptorsRaw(gpuModelImage, null, gpuModelKeyPoints))
        using (GpuBruteForceMatcher matcher = new GpuBruteForceMatcher(GpuBruteForceMatcher.DistanceType.L2))
        {
            modelKeyPoints = new VectorOfKeyPoint();
            surfGPU.DownloadKeypoints(gpuModelKeyPoints, modelKeyPoints);
            watch = Stopwatch.StartNew();

            // extract features from the observed image
            using (GpuImage<Gray, Byte> gpuObservedImage = new GpuImage<Gray, byte>(observedImage))
            using (GpuMat<float> gpuObservedKeyPoints = surfGPU.DetectKeyPointsRaw(gpuObservedImage, null))
            using (GpuMat<float> gpuObservedDescriptors = surfGPU.ComputeDescriptorsRaw(gpuObservedImage, null, gpuObservedKeyPoints))
            using (GpuMat<int> gpuMatchIndices = new GpuMat<int>(gpuObservedDescriptors.Size.Height, 2, 1))
            using (GpuMat<float> gpuMatchDist = new GpuMat<float>(gpuMatchIndices.Size, 1))
            {
                observedKeyPoints = new VectorOfKeyPoint();
                surfGPU.DownloadKeypoints(gpuObservedKeyPoints, observedKeyPoints);
                matcher.KnnMatch(gpuObservedDescriptors, gpuModelDescriptors, gpuMatchIndices, gpuMatchDist, 2, null);
                indices = new Matrix<int>(gpuMatchIndices.Size);
                dist = new Matrix<float>(indices.Size);
                gpuMatchIndices.Download(indices);
                gpuMatchDist.Download(dist);

                mask = new Matrix<byte>(dist.Rows, 1);
                mask.SetValue(255);
                Features2DTracker.VoteForUniqueness(dist, 0.8, mask);

                int nonZeroCount = CvInvoke.cvCountNonZero(mask);
                if (nonZeroCount >= 4)
                {
                    nonZeroCount = Features2DTracker.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
                    if (nonZeroCount >= 4)
                        homography = Features2DTracker.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 3);
                }
                watch.Stop();
            }
        }
    }
    else
    {
        //extract features from the object image
        modelKeyPoints = surfCPU.DetectKeyPointsRaw(modelImage, null);
        //MKeyPoint[] kpts = modelKeyPoints.ToArray();
        Matrix<float> modelDescriptors = surfCPU.ComputeDescriptorsRaw(modelImage, null, modelKeyPoints);
        watch = Stopwatch.StartNew();

        // extract features from the observed image
        observedKeyPoints = surfCPU.DetectKeyPointsRaw(observedImage, null);
        Matrix<float> observedDescriptors = surfCPU.ComputeDescriptorsRaw(observedImage, null, observedKeyPoints);

        BruteForceMatcher matcher = new BruteForceMatcher(BruteForceMatcher.DistanceType.L2F32);
        matcher.Add(modelDescriptors);

        int k = 2;
        indices = new Matrix<int>(observedDescriptors.Rows, k);
        dist = new Matrix<float>(observedDescriptors.Rows, k);
        matcher.KnnMatch(observedDescriptors, indices, dist, k, null);

        mask = new Matrix<byte>(dist.Rows, 1);
        mask.SetValue(255);
        Features2DTracker.VoteForUniqueness(dist, 0.8, mask);

        int nonZeroCount = CvInvoke.cvCountNonZero(mask);
        if (nonZeroCount >= 4)
        {
            nonZeroCount = Features2DTracker.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
            if (nonZeroCount >= 4)
                homography = Features2DTracker.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 3);
        }
        watch.Stop();
    }

    //Draw the matched keypoints
    Image<Bgr, Byte> result = Features2DTracker.DrawMatches(modelImage, modelKeyPoints, observedImage, observedKeyPoints,
        indices, new Bgr(255, 255, 255), new Bgr(255, 255, 255), mask, Features2DTracker.KeypointDrawType.NOT_DRAW_SINGLE_POINTS);

    #region draw the projected region on the image
    if (homography != null)
    {
        //draw a rectangle along the projected model
        Rectangle rect = modelImage.ROI;
        PointF[] pts = new PointF[] {
            new PointF(rect.Left, rect.Bottom),
            new PointF(rect.Right, rect.Bottom),
            new PointF(rect.Right, rect.Top),
            new PointF(rect.Left, rect.Top)};
        homography.ProjectPoints(pts);
        result.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round), true, new Bgr(Color.Red), 5);
    }
    #endregion

    // ImageViewer.Show(result, String.Format("Matched using {0} in {1} milliseconds", GpuInvoke.HasCuda ? "GPU" : "CPU", watch.ElapsedMilliseconds));
}
I found the SURF tutorial you used, but I don't see why it should cause an error. Have you been able to execute the tutorial code by itself, without the GPU acceleration complication?
Moreover, what error occurred?
