I'm trying to detect the contour of an ellipse-like water droplet with Emgu CV. I wrote this code for contour detection:
public List<int> GetDiameters()
{
string inputFile = @"path.jpg";
Image<Bgr, byte> imageInput = new Image<Bgr, byte>(inputFile);
Image<Gray, byte> grayImage = imageInput.Convert<Gray, byte>();
Image<Gray, byte> bluredImage = grayImage;
CvInvoke.MedianBlur(grayImage, bluredImage, 9);
Image<Gray, byte> edgedImage = bluredImage;
CvInvoke.Canny(bluredImage, edgedImage, 50, 5);
Image<Gray, byte> closedImage = edgedImage;
Mat kernel = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Ellipse, new System.Drawing.Size { Height = 100, Width = 250}, new System.Drawing.Point(-1, -1));
CvInvoke.MorphologyEx(edgedImage, closedImage, Emgu.CV.CvEnum.MorphOp.Close, kernel, new System.Drawing.Point(-1, -1), 0, Emgu.CV.CvEnum.BorderType.Replicate, new MCvScalar());
Image<Gray, byte> contoursImage = closedImage;
Image<Bgr, byte> imageOut = imageInput;
VectorOfVectorOfPoint rescontours1 = new VectorOfVectorOfPoint();
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
{
CvInvoke.FindContours(contoursImage, contours, null, Emgu.CV.CvEnum.RetrType.List,
Emgu.CV.CvEnum.ChainApproxMethod.LinkRuns);
MCvScalar color = new MCvScalar(0, 0, 255);
int count = contours.Size;
for (int i = 0; i < count; i++)
{
using (VectorOfPoint contour = contours[i])
using (VectorOfPoint approxContour = new VectorOfPoint())
{
CvInvoke.ApproxPolyDP(contour, approxContour,
0.01 * CvInvoke.ArcLength(contour, true), true);
var area = CvInvoke.ContourArea(contour);
if (area > 0 && approxContour.Size > 10)
{
rescontours1.Push(approxContour);
}
CvInvoke.DrawContours(imageOut, rescontours1, -1, color, 2);
}
}
}
}
Result so far:
I think there is a problem with the approximation. How can I get rid of the internal lines and close the external contour?
I might need some more information to pinpoint your issue exactly, but it could be related to your median blur. I would check whether you are blurring enough for Emgu CV to get a clean Canny edge detection. Another method you could use is Dilate: try dilating your Canny edge output and see if you get better results.
EDIT
Here is the code:
public List<int> GetDiameters()
{
//List to hold output diameters
List<int> diameters = new List<int>();
//File path to where the image is located
string inputFile = @"C:\Users\jones\Desktop\Image Folder\water.JPG";
//Read in the image and store it as a mat object
Mat img = CvInvoke.Imread(inputFile, Emgu.CV.CvEnum.ImreadModes.AnyColor);
//Mat object that will hold the output of the gaussian blur
Mat gaussianBlur = new Mat();
//Blur the image
CvInvoke.GaussianBlur(img, gaussianBlur, new System.Drawing.Size(21, 21), 20, 20, Emgu.CV.CvEnum.BorderType.Default);
//Mat object that will hold the output of the canny
Mat canny = new Mat();
//Canny the image
CvInvoke.Canny(gaussianBlur, canny, 40, 40);
//Mat object that will hold the output of the dilate
Mat dilate = new Mat();
//Dilate the canny image
CvInvoke.Dilate(canny, dilate, null, new System.Drawing.Point(-1, -1), 6, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar(0, 0, 0));
//Vector that will hold all found contours
VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
//Find the contours and draw them on the image
CvInvoke.FindContours(dilate, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
CvInvoke.DrawContours(img, contours, -1, new MCvScalar(255, 0, 0), 5, Emgu.CV.CvEnum.LineType.FourConnected);
//Variables to hold relevant info on which contour is the biggest
int biggest = 0;
int index = 0;
//Find the biggest contour
for (int i = 0; i < contours.Size; i++)
{
if (contours[i].Size > biggest)
{
biggest = contours[i].Size;
index = i;
}
}
//Once all contours have been looped over, add the biggest contour's index to the list
diameters.Add(index);
//Return the list
return diameters;
}
The first thing you do is blur the image.
Then you canny the image.
Then you dilate the image, so as to make the final output contours more uniform.
Then you just find contours.
I know the final contours are a little bigger than the water droplet, but this is the best that I could come up with. You can probably fiddle around with some of the settings and the code above to make the result a little cleaner.
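If you also need actual diameters rather than the index of the biggest contour, here is a hedged sketch (my addition, not part of the original answer) that picks the largest contour by area and fits an ellipse to it; FitEllipse needs at least five points, and the rotated rectangle it returns carries the two axis lengths:
// Hedged sketch: estimate the droplet's two diameters from the largest contour
// found by CvInvoke.FindContours above. Names below are illustrative only.
private static List<int> MeasureDiameters(VectorOfVectorOfPoint contours)
{
    var diameters = new List<int>();
    int biggestIndex = -1;
    double biggestArea = 0;
    for (int i = 0; i < contours.Size; i++)
    {
        double area = CvInvoke.ContourArea(contours[i]);
        if (area > biggestArea)
        {
            biggestArea = area;
            biggestIndex = i;
        }
    }
    if (biggestIndex >= 0 && contours[biggestIndex].Size >= 5)
    {
        RotatedRect ellipse = CvInvoke.FitEllipse(contours[biggestIndex]);
        diameters.Add((int)ellipse.Size.Width);   // one ellipse axis, in pixels
        diameters.Add((int)ellipse.Size.Height);  // the other axis, in pixels
    }
    return diameters;
}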
INPUT IMAGE
Hi, I am trying to learn Emgu CV 3.3 and I have a question about blob counting. As you can see in the INPUT IMAGE, I have black, uneven blobs.
I am trying to do something like this.
OUTPUT IMAGE
I need to draw rectangles around the blobs and count them.
I tried some approaches but none of them worked.
I need Help();
You can use FindContours() or SimpleBlobDetector() to achieve that; here is an example that uses the first one:
Image<Gray, Byte> grayImage = new Image<Gray, Byte>("mRGrc.jpg");
Image<Gray, Byte> canny = new Image<Gray, byte>(grayImage.Size);
int counter = 0;
using (MemStorage storage = new MemStorage())
for (Contour<Point> contours = grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, storage);contours != null; contours = contours.HNext)
{
contours.ApproxPoly(contours.Perimeter * 0.05, storage);
CvInvoke.cvDrawContours(canny, contours, new MCvScalar(255), new MCvScalar(255), -1, 1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
counter++;
}
using (MemStorage store = new MemStorage())
for (Contour<Point> contours1= grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store); contours1 != null; contours1 = contours1.HNext)
{
Rectangle r = CvInvoke.cvBoundingRect(contours1, 1);
canny.Draw(r, new Gray(255), 1);
}
Console.WriteLine("Number of blobs: " + counter);
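The snippet above uses the legacy 2.x Contour<Point>/MemStorage API. Since the question targets Emgu CV 3.3, here is a hedged sketch of the same idea with the 3.x API (CvInvoke.FindContours plus CvInvoke.BoundingRectangle); the file name and the threshold of 128 are assumptions for illustration:
// Load the blob image and binarize it so the dark blobs become white foreground.
Image<Gray, byte> grayImage = new Image<Gray, byte>("blobs.jpg");
Image<Gray, byte> binary = grayImage.ThresholdBinaryInv(new Gray(128), new Gray(255));
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
{
    CvInvoke.FindContours(binary, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
    // One bounding rectangle per external contour, i.e. per blob.
    for (int i = 0; i < contours.Size; i++)
    {
        Rectangle r = CvInvoke.BoundingRectangle(contours[i]);
        grayImage.Draw(r, new Gray(255), 1);
    }
    Console.WriteLine("Number of blobs: " + contours.Size);
}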
I am creating an attendance system using 4 cameras for facial recognition. I am using Emgu CV 3.0 in C#. In my attendance form, which consists of 4 image boxes, the application suddenly stops, goes back to the main form, and shows an error at the button that references the attendance form. The error was:
Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
Here is the code where the error occurred:
private void btn_attendance_Click(object sender, EventArgs e)
{
attendance attendance = new attendance();
attendance.ShowDialog();
}
Here is the code for the Attendance form without the recognition part:
public partial class attendance : Form
{
private Capture cam1, cam2, cam3, cam4;
private CascadeClassifier _cascadeClassifier;
private RecognizerEngine _recognizerEngine;
private String _trainerDataPath = "\\traineddata_v2";
private readonly String dbpath = "Server=localhost;Database=faculty_attendance_system;Uid=root;Pwd=root;";
MySqlConnection conn;
public attendance()
{
InitializeComponent();
conn = new MySqlConnection("Server=localhost;Database=faculty_attendance_system;Uid=root;Pwd=root;");
}
private void btn_home_Click(object sender, EventArgs e)
{
this.Close();
}
private void attendance_Load(object sender, EventArgs e)
{
time_now.Start();
lbl_date.Text = DateTime.Now.ToString("");
_recognizerEngine = new RecognizerEngine(dbpath, _trainerDataPath);
_cascadeClassifier = new CascadeClassifier(Application.StartupPath + "/haarcascade_frontalface_default.xml");
cam1 = new Capture(0);
cam2 = new Capture(1);
cam3 = new Capture(3);
cam4 = new Capture(4);
Application.Idle += new EventHandler(ProcessFrame);
}
private void ProcessFrame(Object sender, EventArgs args)
{
Image<Bgr, byte> nextFrame_cam1 = cam1.QueryFrame().ToImage<Bgr, Byte>();
Image<Bgr, byte> nextFrame_cam2 = cam2.QueryFrame().ToImage<Bgr, Byte>();
Image<Bgr, byte> nextFrame_cam3 = cam3.QueryFrame().ToImage<Bgr, Byte>();
Image<Bgr, byte> nextFrame_cam4 = cam4.QueryFrame().ToImage<Bgr, Byte>();
using (nextFrame_cam1)
{
if (nextFrame_cam1 != null)
{
Image<Gray, byte> grayframe = nextFrame_cam1.Convert<Gray, byte>();
var faces = _cascadeClassifier.DetectMultiScale(grayframe, 1.5, 10, Size.Empty, Size.Empty);
foreach (var face in faces)
{
nextFrame_cam1.Draw(face, new Bgr(Color.Green), 3);
var predictedUserId = _recognizerEngine.RecognizeUser(new Image<Gray, byte>(nextFrame_cam1.Bitmap));
}
imageBox1.Image = nextFrame_cam1;
}
}
using (nextFrame_cam2)
{
if (nextFrame_cam2!= null)
{
Image<Gray, byte> grayframe = nextFrame_cam2.Convert<Gray, byte>();
var faces = _cascadeClassifier.DetectMultiScale(grayframe, 1.5, 10, Size.Empty, Size.Empty);
foreach (var face in faces)
{
nextFrame_cam2.Draw(face, new Bgr(Color.Green), 3);
var predictedUserId = _recognizerEngine.RecognizeUser(new Image<Gray, byte>(nextFrame_cam2.Bitmap));
}
imageBox2.Image = nextFrame_cam2;
}
}
using (nextFrame_cam3)
{
if (nextFrame_cam3!= null)
{
Image<Gray, byte> grayframe = nextFrame_cam3.Convert<Gray, byte>();
var faces = _cascadeClassifier.DetectMultiScale(grayframe, 1.5, 10, Size.Empty, Size.Empty);
foreach (var face in faces)
{
nextFrame_cam3.Draw(face, new Bgr(Color.Green), 3);
var predictedUserId = _recognizerEngine.RecognizeUser(new Image<Gray, byte>(nextFrame_cam3.Bitmap));
}
imageBox3.Image = nextFrame_cam3;
}
}
using (nextFrame_cam4)
{
if (nextFrame_cam4!= null)
{
Image<Gray, byte> grayframe = nextFrame_cam4.Convert<Gray, byte>();
var faces = _cascadeClassifier.DetectMultiScale(grayframe, 1.5, 10, Size.Empty, Size.Empty);
foreach (var face in faces)
{
nextFrame_cam4.Draw(face, new Bgr(Color.Green), 3);
var predictedUserId = _recognizerEngine.RecognizeUser(new Image<Gray, byte>(nextFrame_cam4.Bitmap));
}
imageBox4.Image = nextFrame_cam4;
}
}
}
}
Please read this post to learn what a memory leak is:
http://www.dotnetfunda.com/articles/show/625/best-practices-no-5-detecting-net-application-memory-leaks
Your error indicates that you are creating many instances of a class, or that a function is being called recursively.
Use a using block when creating Emgu objects, so that the managed and unmanaged memory is disposed as soon as the block ends.
public partial class attendance : Form
{
private Capture cam1, cam2, cam3, cam4;
private CascadeClassifier _cascadeClassifier;
private RecognizerEngine _recognizerEngine;
private String _trainerDataPath = "\\traineddata_v2";
private readonly String dbpath = "Server=localhost;Database=faculty_attendance_system;Uid=root;Pwd=root;";
MySqlConnection conn;
public attendance()
{
InitializeComponent();
conn = new MySqlConnection("Server=localhost;Database=faculty_attendance_system;Uid=root;Pwd=root;");
}
private void btn_home_Click(object sender, EventArgs e)
{
this.Close();
}
private void attendance_Load(object sender, EventArgs e)
{
time_now.Start();
lbl_date.Text = DateTime.Now.ToString("");
_recognizerEngine = new RecognizerEngine(dbpath, _trainerDataPath);
_cascadeClassifier = new CascadeClassifier(Application.StartupPath + "/haarcascade_frontalface_default.xml");
cam1 = new Capture(0);
cam2 = new Capture(1);
cam3 = new Capture(3);
cam4 = new Capture(4);
Application.Idle += new EventHandler(ProcessFrame);
}
private void ProcessFrame(Object sender, EventArgs args)
{
using (Image<Bgr, byte> nextFrame_cam1 = cam1.QueryFrame().ToImage<Bgr, Byte>())
{
if (nextFrame_cam1 != null)
{
Image<Gray, byte> grayframe = nextFrame_cam1.Convert<Gray, byte>();
var faces = _cascadeClassifier.DetectMultiScale(grayframe, 1.5, 10, Size.Empty, Size.Empty);
foreach (var face in faces)
{
nextFrame_cam1.Draw(face, new Bgr(Color.Green), 3);
var predictedUserId = _recognizerEngine.RecognizeUser(new Image<Gray, byte>(nextFrame_cam1.Bitmap));
}
imageBox1.Image = nextFrame_cam1;
}
}
using (Image<Bgr, byte> nextFrame_cam2 = cam2.QueryFrame().ToImage<Bgr, Byte>())
{
if (nextFrame_cam2 != null)
{
Image<Gray, byte> grayframe = nextFrame_cam2.Convert<Gray, byte>();
var faces = _cascadeClassifier.DetectMultiScale(grayframe, 1.5, 10, Size.Empty, Size.Empty);
foreach (var face in faces)
{
nextFrame_cam2.Draw(face, new Bgr(Color.Green), 3);
var predictedUserId = _recognizerEngine.RecognizeUser(new Image<Gray, byte>(nextFrame_cam2.Bitmap));
}
imageBox2.Image = nextFrame_cam2;
}
}
using (Image<Bgr, byte> nextFrame_cam3 = cam3.QueryFrame().ToImage<Bgr, Byte>())
{
if (nextFrame_cam3 != null)
{
Image<Gray, byte> grayframe = nextFrame_cam3.Convert<Gray, byte>();
var faces = _cascadeClassifier.DetectMultiScale(grayframe, 1.5, 10, Size.Empty, Size.Empty);
foreach (var face in faces)
{
nextFrame_cam3.Draw(face, new Bgr(Color.Green), 3);
var predictedUserId = _recognizerEngine.RecognizeUser(new Image<Gray, byte>(nextFrame_cam3.Bitmap));
}
imageBox3.Image = nextFrame_cam3;
}
}
using (Image<Bgr, byte> nextFrame_cam4 = cam4.QueryFrame().ToImage<Bgr, Byte>())
{
if (nextFrame_cam4 != null)
{
Image<Gray, byte> grayframe = nextFrame_cam4.Convert<Gray, byte>();
var faces = _cascadeClassifier.DetectMultiScale(grayframe, 1.5, 10, Size.Empty, Size.Empty);
foreach (var face in faces)
{
nextFrame_cam4.Draw(face, new Bgr(Color.Green), 3);
var predictedUserId = _recognizerEngine.RecognizeUser(new Image<Gray, byte>(nextFrame_cam4.Bitmap));
}
imageBox4.Image = nextFrame_cam4;
}
}
}
}
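One caution, as a hedged note of my own rather than part of the original answer: because each frame is created inside a using block, it is disposed when the block ends, while the ImageBox may still reference it for painting. A sketch of one way to avoid that (the variable names match the post; the Clone/Dispose pattern is an assumption):
using (Image<Bgr, byte> nextFrame_cam1 = cam1.QueryFrame().ToImage<Bgr, Byte>())
{
    if (nextFrame_cam1 != null)
    {
        // ... detection and recognition as above ...
        var previous = imageBox1.Image as IDisposable;
        imageBox1.Image = nextFrame_cam1.Clone(); // hand the ImageBox its own copy
        previous?.Dispose();                      // free the frame shown previously
    }
}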
Please follow this document for the standard way to work with Emgu CV for face recognition:
http://www.emgu.com/wiki/index.php/Face_detection
Currently I'm trying to write a small program that detects faces.
I want to cut the grayframe used for detection into pieces the size of the rectangles of the detected faces.
This is my method for the operation:
public List<PreviewImage> GetDetectedSnippets(Capture capture, ProcessType processType)
{
var mat = capture?.QueryFrame();
var imageList = new List<PreviewImage>();
if (mat == null)
return imageList;
var imageframe = mat.ToImage<Bgr, byte>();
var grayframe = imageframe.Convert<Gray, byte>();
Rectangle[] faces = null;
try
{
switch (processType)
{
case ProcessType.Front:
{
faces = _cascadeFrontDefault.DetectMultiScale(grayframe, 1.25, 10, Size.Empty);
}
break;
case ProcessType.Profile:
{
faces = _cascadeProfileFace.DetectMultiScale(grayframe, 1.25, 10, Size.Empty);
}
break;
default:
{
return imageList;
}
}
}
catch (Exception ex)
{
Debug.WriteLine("Could not process snapshot: " + ex);
return imageList;
}
foreach (var face in faces)
{
var detectedImage = imageframe.Clone();
detectedImage.Draw(face, new Bgr(Color.BlueViolet), 4);
var detectedGrayframe = grayframe.GrabCut(face, 1); // This isn't working. Here should the grayframe be cutted into a smaller piece.
imageList.Add(new PreviewImage(detectedImage, detectedGrayframe));
}
return imageList;
}
And this is the PreviewImage class:
public class PreviewImage
{
public Image<Bgr, byte> Original { get; }
public Image<Gray, byte> Grayframe { get; }
public PreviewImage(Image<Bgr, byte> original, Image<Gray, byte> grayframe)
{
Original = original;
Grayframe = grayframe;
}
}
How can I cut the grayframe into a piece with the size of the given rectangle?
This will do the job:
grayframe.ROI = face;
var detectedGrayframe = grayframe.Copy();
grayframe.ROI = Rectangle.Empty;
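For context, a hedged sketch of how that slots into the foreach loop from the question (all surrounding names come from the question's own code):
foreach (var face in faces)
{
    var detectedImage = imageframe.Clone();
    detectedImage.Draw(face, new Bgr(Color.BlueViolet), 4);
    // Restrict the gray image to the face rectangle, copy that region out,
    // then reset the ROI so later operations see the full frame again.
    grayframe.ROI = face;
    var detectedGrayframe = grayframe.Copy();
    grayframe.ROI = Rectangle.Empty;
    imageList.Add(new PreviewImage(detectedImage, detectedGrayframe));
}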
I am doing a project on panoramic stitching of images using Emgu CV (OpenCV for C#). So far I have done some work that stitches images, but the output is kind of weird. This is what I am getting:
My panorama:
This is what the Emgu CV Stitcher.stitch method gives:
Stitched by the built-in stitcher
Clearly I am missing something. Moreover, if I add more images, the output gets more and more stretched, like this one:
I am not able to figure out what I am missing. Here is my code so far:
http://pastebin.com/Ke2Zz4m9
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Threading.Tasks;
using System.Windows.Forms;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Features2D;
using Emgu.CV.Structure;
using Emgu.CV.UI;
using Emgu.CV.Util;
using Emgu.CV.GPU;
namespace Project
{
public partial class Form1 : Form
{
public Form1()
{
InitializeComponent();
}
private void Form1_Load(object sender, EventArgs e)
{
Image<Bgr, float> one = new Image<Bgr, float>("D:\\Venice_panorama_part_01.jpg");
Image<Bgr, float> two = new Image<Bgr, float>("D:\\Venice_panorama_part_02.jpg");
Image<Bgr, float> third = new Image<Bgr, float>("D:\\Venice_panorama_part_03.jpg");
Image<Bgr, float> fourth = new Image<Bgr, float>("D:\\Venice_panorama_part_04.jpg");
Image<Bgr, float> fifth = new Image<Bgr, float>("D:\\Venice_panorama_part_05.jpg");
Image<Bgr, float> sixth = new Image<Bgr, float>("D:\\Venice_panorama_part_06.jpg");
Image<Bgr, float> seventh = new Image<Bgr, float>("D:\\Venice_panorama_part_07.jpg");
Image<Bgr, float> eighth = new Image<Bgr, float>("D:\\Venice_panorama_part_08.jpg");
Image<Bgr, Byte> result = FindMatch(two, third);
result = convert(result);
Image<Bgr, float> twoPlusThree = result.Convert<Bgr, float>();
Image<Bgr, Byte> result2 = FindMatch(fourth, fifth);
result2 = convert(result2);
Image<Bgr, float> fourPlusFive = result2.Convert<Bgr, float>();
Image<Bgr, Byte> result3 = FindMatch(sixth, seventh);
result3 = convert(result3);
Image<Bgr, float> sixPlusSeven = result3.Convert<Bgr, float>();
Image<Bgr, Byte> result4 = FindMatch(one, twoPlusThree);
result4 = convert(result4);
Image<Bgr, float> oneTwoThree = result4.Convert<Bgr, float>();
Image<Bgr, Byte> result5 = FindMatch(oneTwoThree, fourPlusFive);
result5 = convert(result5);
Image<Bgr, float> oneTwoThreeFourFive = result5.Convert<Bgr, float>();
Image<Bgr, Byte> result6 = FindMatch(sixPlusSeven, eighth);
result6 = convert(result6);
Image<Bgr, float> sixSevenEigth = result6.Convert<Bgr, float>();
Image<Bgr, Byte> result7 = FindMatch(oneTwoThreeFourFive, sixSevenEigth);
result7 = convert(result7);
result.Save("D:\\result1.jpg");
result2.Save("D:\\result2.jpg");
result3.Save("D:\\result3.jpg");
result4.Save("D:\\result4.jpg");
result5.Save("D:\\result5.jpg");
result6.Save("D:\\result6.jpg");
result7.Save("D:\\result7.jpg");
this.Close();
}
public static Image<Bgr, Byte> FindMatch(Image<Bgr, float> fImage, Image<Bgr, float> lImage)
{
HomographyMatrix homography = null;
SURFDetector surfCPU = new SURFDetector(500, false);
int k = 2;
double uniquenessThreshold = 0.8;
Matrix<int> indices;
Matrix<byte> mask;
VectorOfKeyPoint modelKeyPoints;
VectorOfKeyPoint observedKeyPoints;
Image<Gray, Byte> fImageG = fImage.Convert<Gray, Byte>();
Image<Gray, Byte> lImageG = lImage.Convert<Gray, Byte>();
if (GpuInvoke.HasCuda)
{
GpuSURFDetector surfGPU = new GpuSURFDetector(surfCPU.SURFParams, 0.01f);
using (GpuImage<Gray, Byte> gpuModelImage = new GpuImage<Gray, byte>(fImageG))
//extract features from the object image
using (GpuMat<float> gpuModelKeyPoints = surfGPU.DetectKeyPointsRaw(gpuModelImage, null))
using (GpuMat<float> gpuModelDescriptors = surfGPU.ComputeDescriptorsRaw(gpuModelImage, null, gpuModelKeyPoints))
using (GpuBruteForceMatcher<float> matcher = new GpuBruteForceMatcher<float>(DistanceType.L2))
{
modelKeyPoints = new VectorOfKeyPoint();
surfGPU.DownloadKeypoints(gpuModelKeyPoints, modelKeyPoints);
// extract features from the observed image
using (GpuImage<Gray, Byte> gpuObservedImage = new GpuImage<Gray, byte>(lImageG))
using (GpuMat<float> gpuObservedKeyPoints = surfGPU.DetectKeyPointsRaw(gpuObservedImage, null))
using (GpuMat<float> gpuObservedDescriptors = surfGPU.ComputeDescriptorsRaw(gpuObservedImage, null, gpuObservedKeyPoints))
using (GpuMat<int> gpuMatchIndices = new GpuMat<int>(gpuObservedDescriptors.Size.Height, k, 1, true))
using (GpuMat<float> gpuMatchDist = new GpuMat<float>(gpuObservedDescriptors.Size.Height, k, 1, true))
using (GpuMat<Byte> gpuMask = new GpuMat<byte>(gpuMatchIndices.Size.Height, 1, 1))
using (Stream stream = new Stream())
{
matcher.KnnMatchSingle(gpuObservedDescriptors, gpuModelDescriptors, gpuMatchIndices, gpuMatchDist, k, null, stream);
indices = new Matrix<int>(gpuMatchIndices.Size);
mask = new Matrix<byte>(gpuMask.Size);
//gpu implementation of voteForUniquess
using (GpuMat<float> col0 = gpuMatchDist.Col(0))
using (GpuMat<float> col1 = gpuMatchDist.Col(1))
{
GpuInvoke.Multiply(col1, new MCvScalar(uniquenessThreshold), col1, stream);
GpuInvoke.Compare(col0, col1, gpuMask, CMP_TYPE.CV_CMP_LE, stream);
}
observedKeyPoints = new VectorOfKeyPoint();
surfGPU.DownloadKeypoints(gpuObservedKeyPoints, observedKeyPoints);
//wait for the stream to complete its tasks
//We can perform some other CPU intensive stuff here while we are waiting for the stream to complete.
stream.WaitForCompletion();
gpuMask.Download(mask);
gpuMatchIndices.Download(indices);
if (GpuInvoke.CountNonZero(gpuMask) >= 4)
{
int nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
if (nonZeroCount >= 4)
homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 2);
}
}
}
}
else
{
//extract features from the object image
modelKeyPoints = new VectorOfKeyPoint();
Matrix<float> modelDescriptors = surfCPU.DetectAndCompute(fImageG, null, modelKeyPoints);
// extract features from the observed image
observedKeyPoints = new VectorOfKeyPoint();
Matrix<float> observedDescriptors = surfCPU.DetectAndCompute(lImageG, null, observedKeyPoints);
BruteForceMatcher<float> matcher = new BruteForceMatcher<float>(DistanceType.L2);
matcher.Add(modelDescriptors);
indices = new Matrix<int>(observedDescriptors.Rows, k);
using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
{
matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
mask = new Matrix<byte>(dist.Rows, 1);
mask.SetValue(255);
Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
}
int nonZeroCount = CvInvoke.cvCountNonZero(mask);
if (nonZeroCount >= 4)
{
nonZeroCount = Features2DToolbox.VoteForSizeAndOrientation(modelKeyPoints, observedKeyPoints, indices, mask, 1.5, 20);
if (nonZeroCount >= 4)
homography = Features2DToolbox.GetHomographyMatrixFromMatchedFeatures(modelKeyPoints, observedKeyPoints, indices, mask, 2);
}
}
Image<Bgr, Byte> mImage = fImage.Convert<Bgr, Byte>();
Image<Bgr, Byte> oImage = lImage.Convert<Bgr, Byte>();
Image<Bgr, Byte> result = new Image<Bgr, byte>(mImage.Width + oImage.Width, mImage.Height);
if (homography != null)
{ //draw a rectangle along the projected model
Rectangle rect = fImage.ROI;
PointF[] pts = new PointF[] {
new PointF(rect.Left, rect.Bottom),
new PointF(rect.Right, rect.Bottom),
new PointF(rect.Right, rect.Top),
new PointF(rect.Left, rect.Top)};
homography.ProjectPoints(pts);
HomographyMatrix origin = new HomographyMatrix(); //copy the left image via an identity transform (no real shift) on the origin
origin.SetIdentity();
origin.Data[0, 2] = 0;
origin.Data[1, 2] = 0;
Image<Bgr, Byte> mosaic = new Image<Bgr, byte>(mImage.Width + oImage.Width + 2000, mImage.Height*2);
Image<Bgr, byte> warp_image = mosaic.Clone();
mosaic = mImage.WarpPerspective(origin, mosaic.Width, mosaic.Height, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR, Emgu.CV.CvEnum.WARP.CV_WARP_DEFAULT, new Bgr(0, 0, 0));
warp_image = oImage.WarpPerspective(homography, warp_image.Width, warp_image.Height, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR, Emgu.CV.CvEnum.WARP.CV_WARP_INVERSE_MAP, new Bgr(200, 0, 0));
Image<Gray, byte> warp_image_mask = oImage.Convert<Gray, byte>();
warp_image_mask.SetValue(new Gray(255));
Image<Gray, byte> warp_mosaic_mask = mosaic.Convert<Gray, byte>();
warp_mosaic_mask.SetZero();
warp_mosaic_mask = warp_image_mask.WarpPerspective(homography, warp_mosaic_mask.Width, warp_mosaic_mask.Height, Emgu.CV.CvEnum.INTER.CV_INTER_LINEAR, Emgu.CV.CvEnum.WARP.CV_WARP_INVERSE_MAP, new Gray(0));
warp_image.Copy(mosaic, warp_mosaic_mask);
return mosaic;
}
return null;
}
private Image<Bgr, Byte> convert(Image<Bgr, Byte> img)
{
Image<Gray, byte> imgGray = img.Convert<Gray, byte>();
Image<Gray, byte> mask = imgGray.CopyBlank();
Contour<Point> largestContour = null;
double largestarea = 0;
for (var contours = imgGray.FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
RETR_TYPE.CV_RETR_EXTERNAL); contours != null; contours = contours.HNext)
{
if (contours.Area > largestarea)
{
largestarea = contours.Area;
largestContour = contours;
}
}
CvInvoke.cvSetImageROI(img, largestContour.BoundingRectangle);
return img;
}
}
}
Actually there is nothing wrong with your code, and this image is totally correct. Please notice that when you stitch all the images together, you are taking the first (left) image as the reference plane and setting it as the front direction; all the subsequent images, which originally faced to the right, are projected onto a plane at the front. Imagine you are sitting inside a room: the wall in front of you appears rectangular, while the one on your right side may look trapezoidal. This is because of so-called "perspective distortion"/homography, and the larger the horizontal angle of view, the more noticeable this phenomenon.
So if one intends to stitch a series of images that covers a wide angle of view, one typically projects onto a cylindrical or spherical surface instead of a planar one. You may find this option by searching the reference manual.
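As a hedged illustration of that suggestion (my sketch, not code from the answer): each input image can be warped onto a cylinder before feature matching, which prevents the progressive stretching. The focal length f, in pixels, is an assumed parameter you would have to estimate for your camera:
// Hedged sketch: back-project each destination pixel from a cylinder of focal
// length f onto the original planar image, with nearest-neighbour sampling.
public static Image<Bgr, byte> ProjectToCylinder(Image<Bgr, byte> src, double f)
{
    int w = src.Width, h = src.Height;
    double xc = w / 2.0, yc = h / 2.0;
    Image<Bgr, byte> dst = new Image<Bgr, byte>(w, h);
    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            double theta = (x - xc) / f;   // angle around the cylinder
            double height = (y - yc) / f;  // height on the cylinder
            double xSrc = f * Math.Tan(theta) + xc;
            double ySrc = f * height / Math.Cos(theta) + yc;
            int xi = (int)Math.Round(xSrc);
            int yi = (int)Math.Round(ySrc);
            if (xi >= 0 && xi < w && yi >= 0 && yi < h)
                dst[y, x] = src[yi, xi];
        }
    }
    return dst;
}
Running each source image through such a projection before FindMatch should keep the mosaic from growing trapezoidal, at the cost of straight lines becoming slightly curved.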