OpenCV: Finding multiple matches - C#

I have the following, but I can't figure out how to find ALL the matches in a source image.
static void Main()
{
using (var template = Cv.LoadImage(@"images\logo.png", LoadMode.GrayScale))
using (var source = Cv.LoadImage(@"images\manyLogos.png", LoadMode.GrayScale))
using (var sourceColour = Cv.LoadImage(@"images\manyLogos.png", LoadMode.Color))
{
var width = source.Width - template.Width + 1;
var height = source.Height - template.Height + 1;
using (var result = Cv.CreateImage(Cv.Size(width, height), BitDepth.F32, 1))
{
Cv.MatchTemplate(source, template, result, MatchTemplateMethod.SqDiff);
var THRESHOLD = 0.08D;
double minVal, maxVal;
CvPoint minLoc, maxLoc;
Cv.MinMaxLoc(result, out minVal, out maxVal, out minLoc, out maxLoc);
var outlineColor = (minVal > THRESHOLD) ? CvColor.Green : CvColor.Red;
Cv.Rectangle(sourceColour, Cv.Point(minLoc.X, minLoc.Y), Cv.Point(minLoc.X + template.Width, minLoc.Y + template.Height), outlineColor, 1, 0, 0);
}
using (var window = new CvWindow("Test"))
{
while (CvWindow.WaitKey(10) < 0)
{
window.Image = sourceColour;
}
}
}
}
I can outline the best match, just not all the matches. I need to get all the matches somehow.

Using the matchTemplate method, your output image gives you pixel values that represent how well your template matches at each specific location. In your case, the lower the value, the better the match, since you used MatchTemplateMethod.SqDiff.
Your problem is that when you use the minMaxLoc function, you get exactly what you ask for, which is the single best match (in this case, the min).
All matches are the pixels whose values are under the threshold that you set up.
Since I'm not used to C#, here is how it would go in C++; you can do the translation:
// after your call to MatchTemplate
float threshold = 0.08;
cv::Mat thresholdedImage;
cv::threshold(result, thresholdedImage, threshold, 255, CV_THRESH_BINARY);
// the above will set pixels to 0 in thresholdedImage if their value in result is lower than the threshold, to 255 if it is larger.
// in C++ you could also write cv::Mat matchMask = result < threshold; note that this gives the inverse mask (255 where the value is below the threshold, i.e. at the matches).
// Now loop over pixels of thresholdedImage, and draw your matches
for (int r = 0; r < thresholdedImage.rows; ++r) {
for (int c = 0; c < thresholdedImage.cols; ++c) {
if (thresholdedImage.at<float>(r, c) == 0) // result(r, c) was below the threshold, i.e. a match (result and thresholdedImage are CV_32F)
cv::circle(sourceColor, cv::Point(c, r), template.cols/2, CV_RGB(0,255,0), 1);
}
}

Translating from C++ and using the OpenCvSharp wrapper, the above code (replacing the minMaxLoc lines) worked for me:
double threshold = 0.9;
var thresholdImage = Cv.CreateImage(result.GetSize(), BitDepth.F32, 1);
Cv.Threshold(result, thresholdImage, threshold, 255, ThresholdType.Binary);
for (int r = 0; r < thresholdImage.GetSize().Height; r++)
{
for (int c = 0; c < thresholdImage.GetSize().Width; c++)
{
if (thresholdImage.GetRow(r)[c].Val0 > 0)
{
Cv.Rectangle(sourceColour, Cv.Point(c, r), Cv.Point(c + template.Width, r + template.Height), CvColor.Red, 1, 0, 0);
}
}
}
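A note on methods: the question used MatchTemplateMethod.SqDiff, where a low result value means a good match, so with that method the matches are the pixels below the threshold rather than above it. A minimal sketch of that variant (untested; it assumes the result, template and sourceColour images from the question, and uses ThresholdType.BinaryInv so the mask is 255 at the matches):
double sqDiffThreshold = 0.08;
using (var matchMask = Cv.CreateImage(result.GetSize(), BitDepth.F32, 1))
{
    // BinaryInv: 255 where result < threshold (good SqDiff matches), 0 elsewhere
    Cv.Threshold(result, matchMask, sqDiffThreshold, 255, ThresholdType.BinaryInv);
    for (int r = 0; r < matchMask.GetSize().Height; r++)
    {
        for (int c = 0; c < matchMask.GetSize().Width; c++)
        {
            if (matchMask.GetRow(r)[c].Val0 > 0)
            {
                Cv.Rectangle(sourceColour, Cv.Point(c, r), Cv.Point(c + template.Width, r + template.Height), CvColor.Red, 1, 0, 0);
            }
        }
    }
}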

Here is a solution using the MinMax and MatchTemplate methods. Hope it will help.
public void multipleTemplateMatch(string SourceImages, string tempImage)
{
Image<Bgr, byte> image_source = new Image<Bgr, byte>(SourceImages);
Image<Bgr, byte> image_partial1 = new Image<Bgr, byte>(tempImage);
double threshold = 0.9;
ImageFinder imageFinder = new ImageFinder(image_source, image_partial1, threshold);
imageFinder.FindThenShow();
}
And here is the class that does the work:
class ImageFinder
{
private List<Rectangle> rectangles;
public Image<Bgr, byte> BaseImage { get; set; }
public Image<Bgr, byte> SubImage { get; set; }
public Image<Bgr, byte> ResultImage { get; set; }
public double Threshold { get; set; }
public List<Rectangle> Rectangles
{
get { return rectangles; }
}
public ImageFinder(Image<Bgr, byte> baseImage, Image<Bgr, byte> subImage, double threshold)
{
rectangles = new List<Rectangle>();
BaseImage = baseImage;
SubImage = subImage;
Threshold = threshold;
}
public void FindThenShow()
{
FindImage();
DrawRectanglesOnImage();
ShowImage();
}
public void DrawRectanglesOnImage()
{
ResultImage = BaseImage.Copy();
foreach (var rectangle in this.rectangles)
{
ResultImage.Draw(rectangle, new Bgr(Color.Blue), 1);
}
}
public void FindImage()
{
rectangles = new List<Rectangle>();
using (Image<Bgr, byte> imgSrc = BaseImage.Copy())
{
while (true)
{
using (Image<Gray, float> result = imgSrc.MatchTemplate(SubImage, TemplateMatchingType.CcoeffNormed))
{
double[] minValues, maxValues;
Point[] minLocations, maxLocations;
result.MinMax(out minValues, out maxValues, out minLocations, out maxLocations);
if (maxValues[0] > Threshold)
{
Rectangle match = new Rectangle(maxLocations[0], SubImage.Size);
imgSrc.Draw(match, new Bgr(Color.Blue), -1);
rectangles.Add(match);
}
else
{
break;
}
}
}
}
}
public void ShowImage()
{
Random rNo = new Random();
string outFilename = "matched Templates" + rNo.Next();
CvInvoke.Imshow(outFilename, ResultImage);
}
}
If you find this helpful, please vote it as useful.
Thanks.

Related

How to compare colors on an image and crop the diff?

In order to perform visual tests under Selenium, I run image comparison tests (2 images only). I use the file size to see whether there is a difference or not. However, nothing tells me where the difference is, and I would like to be able to display the difference(s) on the image.
I was thinking of a comparison by color instead of by size. It seems complex to me, especially since I would like to have an image output showing the difference (with a crop that specifies the area) or to extract the pixels affected by this difference. Do you think it's possible to do this under Selenium in C#? For the moment, I have only tried by size.
public static void TestCompare()
{
string imgPath1 = <//PATHNAME >
string imgPath2 = <//PATHNAME >
const int size = 1000;
var len = new FileInfo(imgPath1).Length;
if (len != new FileInfo(imgPath2).Length) Assert.Fail();
var s1 = File.OpenRead(imgPath1);
var s2 = File.OpenRead(imgPath2);
var buf1 = new byte[size];
var buf2 = new byte[size];
for (int i = 0; i < len / size; i++)
{
s1.Read(buf1, 0, size);
s2.Read(buf2, 0, size);
if (CompareBuffers(buf1, buf2) == false)
Assert.Fail();
}
Assert.True(true);
}
I have a custom-made image comparer in C#.
It compares 2 images, ignores magenta pixels (you can use magenta as a MASK for areas to ignore when comparing), and marks different pixels as blue in a new image:
////////////////////// VARIABLES //////////////////////////
private string pathReferenceImg;
private string pathTestImg;
private FileInfo fReferenceFile;
private FileInfo fTestFile;
private Bitmap referenceImage;
private Bitmap testImage;
private int areaToCompareWidth;
private int areaToCompareHeight;
public int xMinAreaToCompare = 0;
public int yMinAreaToCompare = 0;
public int pixelDifferenceQuantity = 0;
public List<Point> differentPixelsList = new List<Point>();
private int[] rgbArrayTestImgWithReferenceImgPink;
private int tolerance = 15;
public bool result = false;
////////////////////// CODE //////////////////////////
public void compareFiles(string pathReferenceImg, string pathTestImg)
{
fReferenceFile = new FileInfo(pathReferenceImg);
fTestFile = new FileInfo(pathTestImg);
referenceImage = new Bitmap(pathReferenceImg);
testImage = new Bitmap(pathTestImg);
areaToCompareWidth = referenceImage.Width;
areaToCompareHeight = referenceImage.Height;
while (xMinAreaToCompare < areaToCompareWidth)
{
Color colorRef = referenceImage.GetPixel(xMinAreaToCompare, yMinAreaToCompare);
Color colorTest = testImage.GetPixel(xMinAreaToCompare, yMinAreaToCompare);
//Magenta = 255R,255B,0G
if (colorRef.ToArgb() != Color.Magenta.ToArgb())
{
if (colorRef != colorTest)
{
pixelDifferenceQuantity++;
differentPixelsList.Add(new Point(xMinAreaToCompare, yMinAreaToCompare));
}
}
yMinAreaToCompare ++;
if (yMinAreaToCompare == areaToCompareHeight)
{
xMinAreaToCompare ++;
yMinAreaToCompare = 0;
}
}
if (pixelDifferenceQuantity >= tolerance)
{
Bitmap resultImage = new Bitmap(testImage);
foreach (Point pixel in differentPixelsList)
{
resultImage.SetPixel(pixel.X, pixel.Y, Color.Blue);
}
resultImage.Save(pathTestImg.Replace("TestFolder", "ResultFolder"));
}
else
{
result = true;
}
}
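If you also need the "crop that specifies the area" asked about in the question, one option (a sketch only, assuming the differentPixelsList and testImage fields from the comparer above, plus a using System.Linq; directive) is to compute the bounding rectangle of the differing pixels and clone that region out of the test image:
// Hypothetical helper: crops the bounding box of all differing pixels from the test image.
public Bitmap CropDifferenceArea()
{
    if (differentPixelsList.Count == 0) return null; // no differences found (outside the mask)
    int minX = differentPixelsList.Min(p => p.X);
    int maxX = differentPixelsList.Max(p => p.X);
    int minY = differentPixelsList.Min(p => p.Y);
    int maxY = differentPixelsList.Max(p => p.Y);
    Rectangle diffArea = new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1);
    return testImage.Clone(diffArea, testImage.PixelFormat);
}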
hope it helps.

Calculate actual velocity using optical flow Lucas-Kanade and EmguCV

How do I calculate the actual speed of all moving objects in a video, using the Lucas-Kanade algorithm to compute the optical flow?
I need to do this on this video. The camera is fixed in one place (Fig. 1).
I find the key points and track them using the Lucas-Kanade algorithm (Fig. 2).
How do I use this algorithm to calculate the actual speed of each car?
Thank you for your answers!
My code:
public class OpticalFlowLK : BaseFilter
{
private Mat prevFrame;
private Mat nextFrame;
private bool prevFrameEmpty = true;
private GFTTDetector gFTTDetector;
private Stopwatch sWatch;
private double time = 0.04;
public OpticalFlowLK()
{
TAG = "[Optical Flow Lucas Kanade]";
gFTTDetector = new GFTTDetector(500);
sWatch = new Stopwatch();
}
protected override Mat ProcessFrame(ref Mat frame)
{
Mat rez = new Mat();
frame.CopyTo(rez);
nextFrame = new Mat();
CvInvoke.CvtColor(frame, nextFrame, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);
if (!prevFrameEmpty)
{
VectorOfKeyPoint prevFeatures = new VectorOfKeyPoint(gFTTDetector.Detect(prevFrame));
//Features2DToolbox.DrawKeypoints(rez, prevFeatures, rez, new Bgr(0, 0, 255));
PointF[] prevPts = new PointF[prevFeatures.Size];
for (int i = 0; i < prevFeatures.Size; i++)
{
prevPts[i] = prevFeatures[i].Point;
}
PointF[] nextPts;
byte[] status;
float[] errors;
sWatch.Start();
CvInvoke.CalcOpticalFlowPyrLK(prevFrame, nextFrame, prevPts, new Size(20, 20), 1, new MCvTermCriteria(20, 0.03), out nextPts, out status, out errors);
sWatch.Stop();
sWatch.Reset();
prevFrame = nextFrame.Clone();
for (int i = 0; i < status.Length; i++)
{
Point prevPt = new Point((int)prevPts[i].X, (int)prevPts[i].Y);
Point nextPt = new Point((int)nextPts[i].X,(int)nextPts[i].Y);
double length = Math.Sqrt(Math.Pow(prevPt.X - nextPt.X, 2) + Math.Pow(prevPt.Y - nextPt.Y, 2));
if (length > 3)
{
CvInvoke.Circle(rez, nextPt, 1, new MCvScalar(0, 255, 0), 2);
}
}
sWatch.Stop();
prevFrameEmpty = false;
}
else if (prevFrameEmpty)
{
prevFrame = nextFrame.Clone();
prevFrameEmpty = false;
}
return rez;
}
protected override bool InitFilter(ref Mat frame)
{
throw new NotImplementedException();
}
}
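The optical flow itself only gives a displacement in pixels between two frames. To turn that into an actual speed you need two extra pieces of information: the time between frames (e.g. 0.04 s for a 25 fps video, which matches the time field above) and a scale that maps pixels to metres on the road (for a fixed camera this can be estimated from a known distance in the scene, such as the lane width). A rough sketch, assuming a hypothetical metersPerPixel calibration value:
// Sketch only: convert a tracked point's pixel displacement into an approximate speed.
// metersPerPixel is an assumed calibration value (e.g. measured from a known lane width);
// frameTime is the time between the two frames in seconds.
double metersPerPixel = 0.05; // hypothetical: 5 cm of road per pixel
double frameTime = 0.04;      // 25 fps video

double pixelDistance = Math.Sqrt(
    Math.Pow(prevPts[i].X - nextPts[i].X, 2) +
    Math.Pow(prevPts[i].Y - nextPts[i].Y, 2));

double speedMetersPerSecond = pixelDistance * metersPerPixel / frameTime;
double speedKmPerHour = speedMetersPerSecond * 3.6;
Note that with a perspective camera a single metersPerPixel value is only a rough approximation, since pixels near the bottom of the frame cover less road than pixels near the top; warping the image to a top-down view (a homography) before measuring distances gives a more accurate result.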

How to detect rectangles in an image with Emgu CV?

I am trying to detect rectangles with Emgu CV in C#. I was playing around with the following code that I got off the internet. I am new to this, so I hope someone can help me out:
public class ShapeDectection
{
public Image<Bgr, Byte> img;
public PictureBox Picture;
public PictureBox Result;
public double dCannyThres;
private UMat uimage;
private UMat cannyEdges;
private List<Triangle2DF> triangleList;
private List<RotatedRect> boxList;
private Image<Bgr, Byte> triangleRectImage;
public ShapeDectection(PictureBox pic, string filepath, PictureBox results)
{
Picture = pic;
Result = results;
img = new Image<Bgr, Byte>(filepath);
triangleList = new List<Triangle2DF>();
boxList = new List<RotatedRect>();
uimage = new UMat();
cannyEdges = new UMat();
dCannyThres = 180.0;
fnFindTriangleRect();
MessageBox.Show("done");
}
private void fnFindTriangleRect()
{
CvInvoke.CvtColor(img, uimage, ColorConversion.Bgr2Gray);
UMat pyrDown = new UMat();
CvInvoke.PyrDown(uimage, pyrDown);
CvInvoke.PyrUp(pyrDown, uimage);
triangleRectImage = img.CopyBlank();
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
{
CvInvoke.FindContours(cannyEdges, contours, null, RetrType.List, ChainApproxMethod.ChainApproxSimple);
int count = contours.Size;
MessageBox.Show("count = " + count);
for (int i = 0; i < count; i++)
{
using (VectorOfPoint contour = contours[i])
using (VectorOfPoint approxContour = new VectorOfPoint())
{
CvInvoke.ApproxPolyDP(contour, approxContour, CvInvoke.ArcLength(contour, true) * 0.05, true);
if (CvInvoke.ContourArea(approxContour, false) > 250) //only consider contour with area > 250
{
MessageBox.Show("approxContour.Size = " + approxContour.Size);
if (approxContour.Size == 3) //The contour has 3 vertices, is a triangle
{
Point[] pts = approxContour.ToArray();
triangleList.Add(new Triangle2DF(pts[0], pts[1], pts[2]));
}
else if (approxContour.Size == 4) // The contour has 4 vertices
{
#region Determine if all the angles in the contours are within [80,100] degree
bool isRectangle = true;
Point[] pts = approxContour.ToArray();
LineSegment2D[] edges = PointCollection.PolyLine(pts, true);
for (int j = 0; j < edges.Length; j++)
{
double dAngle = Math.Abs(edges[(j + 1) % edges.Length].GetExteriorAngleDegree(edges[j]));
MessageBox.Show("" + dAngle);
if (dAngle < 80 || dAngle > 100)
{
isRectangle = false;
break;
}
}
#endregion
if (isRectangle) boxList.Add(CvInvoke.MinAreaRect(approxContour));
}
}
}
}
}
foreach (Triangle2DF triangle in triangleList)
{
triangleRectImage.Draw(triangle, new Bgr(Color.DarkBlue), 2);
}
foreach (RotatedRect box in boxList)
triangleRectImage.Draw(box, new Bgr(Color.Red), 2);
Result.Image = triangleRectImage.ToBitmap();
}
}
I am trying to detect the shapes in the following picture, however this is the result:
as you can see, no shape was detected by the script, as the number of contours was zero. How can I modify the script so that I can detect these shapes?
I can't seem to find "Canny edge detection"
UMat cannyEdges = new UMat();
CvInvoke.Canny(uimage, cannyEdges, cannyThreshold, cannyThresholdLinking);
in your code. Here is the link to the EmguCV wiki page: http://www.emgu.com/wiki/index.php/Shape_(Triangle,_Rectangle,_Circle,_Line)_Detection_in_CSharp
Check there for the missing parts.
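In other words, cannyEdges is never filled in before FindContours runs, so the contour count is zero. A minimal sketch of where the call could go inside fnFindTriangleRect, reusing the dCannyThres field already defined in the class (the linking threshold value below is an assumption, roughly in line with the wiki sample):
// Inside fnFindTriangleRect(), after the pyramid smoothing and before FindContours:
CvInvoke.CvtColor(img, uimage, ColorConversion.Bgr2Gray);
UMat pyrDown = new UMat();
CvInvoke.PyrDown(uimage, pyrDown);
CvInvoke.PyrUp(pyrDown, uimage);

// Missing step: compute the edges, otherwise FindContours sees an empty image.
double cannyThresholdLinking = 120.0; // assumed value
CvInvoke.Canny(uimage, cannyEdges, dCannyThres, cannyThresholdLinking);

// ... then CvInvoke.FindContours(cannyEdges, contours, ...) as in the original code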

DrawImage fails to position sliced images correctly

For a couple of days now I've tried to figure out why my nine-slice code does not work as expected. As far as I can see, there seems to be an issue with the Graphics.DrawImage method, which handles my nine-slice images incorrectly. So my problem is how to compensate for the incorrect scaling that is performed when running my code on the Compact Framework. I might add that this code works perfectly when running in the full framework environment. The problem only occurs when scaling the image to a larger size, not the other way around. Here is the snippet:
public class NineSliceBitmapSnippet
{
private Bitmap m_OriginalBitmap;
public int CornerLength { get; set; }
/// <summary>
/// Initializes a new instance of the NineSliceBitmapSnippet class.
/// </summary>
public NineSliceBitmapSnippet(Bitmap bitmap)
{
CornerLength = 5;
m_OriginalBitmap = bitmap;
}
public Bitmap ScaleSingleBitmap(Size size)
{
Bitmap scaledBitmap = new Bitmap(size.Width, size.Height);
int[] horizontalTargetSlices = Slice(size.Width);
int[] verticalTargetSlices = Slice(size.Height);
int[] horizontalSourceSlices = Slice(m_OriginalBitmap.Width);
int[] verticalSourceSlices = Slice(m_OriginalBitmap.Height);
using (Graphics graphics = Graphics.FromImage(scaledBitmap))
{
using (Brush brush = new SolidBrush(Color.Fuchsia))
{
graphics.FillRectangle(brush, new Rectangle(0, 0, size.Width, size.Height));
}
int horizontalTargetOffset = 0;
int verticalTargetOffset = 0;
int horizontalSourceOffset = 0;
int verticalSourceOffset = 0;
for (int x = 0; x < horizontalTargetSlices.Length; x++)
{
verticalTargetOffset = 0;
verticalSourceOffset = 0;
for (int y = 0; y < verticalTargetSlices.Length; y++)
{
Rectangle destination = new Rectangle(horizontalTargetOffset, verticalTargetOffset, horizontalTargetSlices[x], verticalTargetSlices[y]);
Rectangle source = new Rectangle(horizontalSourceOffset, verticalSourceOffset, horizontalSourceSlices[x], verticalSourceSlices[y]);
graphics.DrawImage(m_OriginalBitmap, destination, source, GraphicsUnit.Pixel);
verticalTargetOffset += verticalTargetSlices[y];
verticalSourceOffset += verticalSourceSlices[y];
}
horizontalTargetOffset += horizontalTargetSlices[x];
horizontalSourceOffset += horizontalSourceSlices[x];
}
}
return scaledBitmap;
}
public int[] Slice(int length)
{
int cornerLength = CornerLength;
if (length <= (cornerLength * 2))
throw new Exception("Image to small for sliceing up");
int[] slices = new int[3];
slices[0] = cornerLength;
slices[1] = length - (2 * cornerLength);
slices[2] = cornerLength;
return slices;
}
}
So, my question is: does anybody know how I could compensate for the incorrect scaling?
/Dan
After some more trial and error I've finally found a solution to my problem. The scaling problems have always been in the top-center, right-center, bottom-center and left-center slices, since they're always stretched in only one direction according to the logic of nine-slice scaling. If I apply a temporary square stretch to those slices before applying the correct stretch, the final bitmap will be correct. Once again, the problem is only visible in the .NET Compact Framework on a Windows CE device (Smart Device). Here's a snippet with code adjusting for the bug in CF. My only concern now is that the slices that get square-stretched will take much more memory due to the correction code. On the other hand this step only takes a short period of time, so I might get away with it. ;)
public class NineSliceBitmapSnippet
{
private Bitmap m_OriginalBitmap;
public int CornerLength { get; set; }
public NineSliceBitmapSnippet(Bitmap bitmap)
{
CornerLength = 5;
m_OriginalBitmap = bitmap;
}
public Bitmap Scale(Size size)
{
if (m_OriginalBitmap != null)
{
return ScaleSingleBitmap(size);
}
return null;
}
public Bitmap ScaleSingleBitmap(Size size)
{
Bitmap scaledBitmap = new Bitmap(size.Width, size.Height);
int[] horizontalTargetSlices = Slice(size.Width);
int[] verticalTargetSlices = Slice(size.Height);
int[] horizontalSourceSlices = Slice(m_OriginalBitmap.Width);
int[] verticalSourceSlices = Slice(m_OriginalBitmap.Height);
using (Graphics graphics = Graphics.FromImage(scaledBitmap))
{
using (Brush brush = new SolidBrush(Color.Fuchsia))
{
graphics.FillRectangle(brush, new Rectangle(0, 0, size.Width, size.Height));
}
int horizontalTargetOffset = 0;
int verticalTargetOffset = 0;
int horizontalSourceOffset = 0;
int verticalSourceOffset = 0;
for (int x = 0; x < horizontalTargetSlices.Length; x++)
{
verticalTargetOffset = 0;
verticalSourceOffset = 0;
for (int y = 0; y < verticalTargetSlices.Length; y++)
{
Rectangle destination = new Rectangle(horizontalTargetOffset, verticalTargetOffset, horizontalTargetSlices[x], verticalTargetSlices[y]);
Rectangle source = new Rectangle(horizontalSourceOffset, verticalSourceOffset, horizontalSourceSlices[x], verticalSourceSlices[y]);
bool isWidthAffectedByVerticalStretch = (y == 1 && (x == 0 || x == 2) && destination.Height > source.Height);
bool isHeightAffectedByHorizontalStretch = (x == 1 && (y == 0 || y == 2) && destination.Width > source.Width);
if (isHeightAffectedByHorizontalStretch)
{
BypassDrawImageError(graphics, destination, source, Orientation.Horizontal);
}
else if (isWidthAffectedByVerticalStretch)
{
BypassDrawImageError(graphics, destination, source, Orientation.Vertical);
}
else
{
graphics.DrawImage(m_OriginalBitmap, destination, source, GraphicsUnit.Pixel);
}
verticalTargetOffset += verticalTargetSlices[y];
verticalSourceOffset += verticalSourceSlices[y];
}
horizontalTargetOffset += horizontalTargetSlices[x];
horizontalSourceOffset += horizontalSourceSlices[x];
}
}
return scaledBitmap;
}
private void BypassDrawImageError(Graphics graphics, Rectangle destination, Rectangle source, Orientation orientationAdjustment)
{
Size adjustedSize = Size.Empty;
switch (orientationAdjustment)
{
case Orientation.Horizontal:
adjustedSize = new Size(destination.Width, destination.Width);
break;
case Orientation.Vertical:
adjustedSize = new Size(destination.Height, destination.Height);
break;
default:
break;
}
using (Bitmap quadScaledBitmap = new Bitmap(adjustedSize.Width, adjustedSize.Height))
{
using (Graphics tempGraphics = Graphics.FromImage(quadScaledBitmap))
{
tempGraphics.Clear(Color.Fuchsia);
tempGraphics.DrawImage(m_OriginalBitmap, new Rectangle(0, 0, adjustedSize.Width, adjustedSize.Height), source, GraphicsUnit.Pixel);
}
graphics.DrawImage(quadScaledBitmap, destination, new Rectangle(0, 0, quadScaledBitmap.Width, quadScaledBitmap.Height), GraphicsUnit.Pixel);
}
}
public int[] Slice(int length)
{
int cornerLength = CornerLength;
if (length <= (cornerLength * 2))
throw new Exception("Image to small for sliceing up");
int[] slices = new int[3];
slices[0] = cornerLength;
slices[1] = length - (2 * cornerLength);
slices[2] = cornerLength;
return slices;
}
}
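A minimal usage sketch (file names are just placeholders) that scales a small nine-slice source bitmap up to a button-sized bitmap with the class above:
// Hypothetical usage: scale a 16x16 nine-slice source up to 120x40.
using (Bitmap source = new Bitmap("button9slice.png"))
using (Bitmap scaled = new NineSliceBitmapSnippet(source).Scale(new Size(120, 40)))
{
    scaled.Save("button120x40.png", System.Drawing.Imaging.ImageFormat.Png);
}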

Emgu CV CvInvoke.cvRemap hangs when trying to undistort from stereo calibration data

I'm trying to implement a stereo camera calibration app using Emgu CV.
My problem is that when I try to use CvInvoke.cvRemap to undistort an image, the function just hangs. No errors or crashes, it just hangs, and I've left it for 2 hours in case it was just being slow. Here's what I'm doing:
1. Capturing 10 pairs of chessboard samples (left and right), making sure FindChessboardCorners works on each. I'm not doing anything special to sync the cameras, just capturing them at the same time.
2. Generate a set of object points based off the chessboard used.
3. Doing a separate CalibrateCamera on the left and right images of each sample using the object points from 2 and the image points from 1.
4. Doing a StereoCalibrate using the IntrinsicCameraParameters generated by CalibrateCamera in 3, the object points in 2, and the image points captured from the chessboards in 1.
5. Doing a StereoRectify using the IntrinsicCameraParameters from 3/4.
6. Generating mapx and mapy for both left and right from cvInitUndistortRectifyMap using the output from 5.
7. Attempting to cvRemap using mapx and mapy from 6 and fresh images captured from the cameras.
NEXT: Use StereoBM.FindStereoCorrespondence and PointCollection.ReprojectImageTo3D to generate a point cloud from my hopefully calibrated stereo data.
So when I get to step 7, cvRemap just hangs. I've gotten cvRemap to work when capturing from a single camera though, so I know the function is working to some degree with my setup.
I've written a class to manage multiple cameras:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Windows.Forms;
using Emgu.CV;
using Emgu.CV.UI;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.VideoSurveillance;
namespace Capture2Cams
{
class camData
{
public Capture capture;
public Image<Bgr, Byte> lastFrame;
public Image<Gray, Byte> lastFrameGray;
public bool lastChessboardFound;
public PointF[] lastChessboardCorners;
public Image<Gray, Byte>[] samplesGray;
public PointF[][] samplesChessboardCorners;
public Size cbDimensions;
public Size imageDimensions;
public int cursampleIndex = 0;
public ImageList sampleIcons;
private Image<Gray, Byte> _chessBoardDisplay;
private int _iconWidth = 160;
private int _icnonHeight = 90;
private int _numSamples = 0;
public int numSamples()
{
return _numSamples;
}
public void numSamples(int val)
{
_numSamples = val;
this.samplesGray = new Image<Gray, Byte>[val];
this.samplesChessboardCorners = new PointF[val][];
this.sampleIcons.ImageSize = new Size(_iconWidth, _icnonHeight);
Bitmap tmp = new Bitmap(_iconWidth, _icnonHeight);
this.sampleIcons.Images.Clear();
for (int c = 0; c < _numSamples; c++) this.sampleIcons.Images.Add(tmp);
}
public camData(int camIndex, int capWidth, int capHeight, int pcbWidth, int pcbHeight, int pNumSamples)
{
this.sampleIcons = new ImageList();
try
{
this.capture = new Capture(camIndex);
this.capture.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_FRAME_WIDTH, capWidth);
this.capture.SetCaptureProperty(CAP_PROP.CV_CAP_PROP_FRAME_HEIGHT, capHeight);
}
catch (Exception e)
{
MessageBox.Show(e.Message);
return;
}
this.imageDimensions = new Size(capWidth, capHeight);
this.cbDimensions = new Size(pcbWidth, pcbHeight);
this.numSamples(pNumSamples);
}
public Image<Gray, Byte> captureFrame()
{
this.lastFrame = this.capture.QueryFrame();
this.lastFrameGray = this.lastFrame.Convert<Gray, Byte>();
return this.lastFrameGray;
}
public int captureSample()
{
this.detectChessboard(true); // detectChessboard calls -> captureFrame
if (lastChessboardFound)
{
this.samplesGray[cursampleIndex] = this.lastFrameGray;
this.samplesChessboardCorners[cursampleIndex] = this.lastChessboardCorners;
this.sampleIcons.Images[this.cursampleIndex] = this.lastFrameGray.ToBitmap(_iconWidth, _icnonHeight);
this.cursampleIndex++;
if (this.cursampleIndex >= _numSamples) this.cursampleIndex = 0;
}
return cursampleIndex;
}
public void clearSamples()
{
this.cursampleIndex = 0;
this.numSamples(_numSamples);
}
public Image<Gray, Byte> detectChessboard(bool pDoCapture)
{
if (pDoCapture) this.captureFrame();
this.lastChessboardFound = CameraCalibration.FindChessboardCorners(this.lastFrameGray, this.cbDimensions, CALIB_CB_TYPE.ADAPTIVE_THRESH | CALIB_CB_TYPE.FILTER_QUADS, out this.lastChessboardCorners);
_chessBoardDisplay = this.lastFrameGray.Clone();
CameraCalibration.DrawChessboardCorners(this._chessBoardDisplay, this.cbDimensions, this.lastChessboardCorners, this.lastChessboardFound);
return this._chessBoardDisplay;
}
public void saveSampleImages(string pPath, string pID)
{
for(int ic = 0; ic < this._numSamples; ic++)
{
this.samplesGray[ic].Save(pPath + pID + ic.ToString() + ".bmp");
}
}
public void loadSampleImages(string pPath, string pID)
{
clearSamples();
for (int ic = 0; ic < this._numSamples; ic++)
{
this.lastFrameGray = new Image<Gray, byte>(new Bitmap(pPath + pID + ic.ToString() + ".bmp"));
this.detectChessboard(false);
this.samplesChessboardCorners[ic] = this.lastChessboardCorners;
this.sampleIcons.Images[ic] = this.lastFrameGray.ToBitmap(_iconWidth, _icnonHeight);
this.samplesGray[ic] = this.lastFrameGray;
}
}
}
}
And here's my form code with the rest of the calibration logic:
using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Linq;
using System.Text;
using System.Windows.Forms;
using System.Runtime.InteropServices;
using Emgu.CV.Util;
using Emgu.CV;
using Emgu.CV.UI;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.VideoSurveillance;
namespace Capture2Cams
{
public partial class CaptureForm : Form
{
private static camData camLeft;
private static camData camRight;
private int _numSamples = 10; // Number of calibration samples
private int _imageWidth = 1280; // web cam resolution
private int _imageHeight = 720; // web cam resolution
private int _cbWidth = 9; // chessboard corner count
private int _cbHeight = 5; // chessboard corner count
// TODO: Test post calibration values, these will need to be loaded and saved
private static Matrix<double> _foundamentalMatrix;
private static Matrix<double> _essentialMatrix;
private static IntrinsicCameraParameters _inPramsLeft;
private static IntrinsicCameraParameters _inPramsRight;
private static ExtrinsicCameraParameters _outExtParamsStereo;
private Matrix<float> _mapxLeft;
private Matrix<float> _mapyLeft;
private Matrix<float> _mapxRight;
private Matrix<float> _mapyRight;
public CaptureForm()
{
InitializeComponent();
Run();
}
void Run()
{
camLeft = new camData(0, _imageWidth, _imageHeight, _cbWidth, _cbHeight, _numSamples);
camRight = new camData(1, _imageWidth, _imageHeight, _cbWidth, _cbHeight, _numSamples);
this.listViewLeft.LargeImageList = camLeft.sampleIcons;
for (int c = 0; c < _numSamples; c++)
{
ListViewItem curItem = new ListViewItem();
curItem.ImageIndex = c;
curItem.Text = "Sample" + c.ToString();
this.listViewLeft.Items.Add(curItem);
}
this.listViewRight.LargeImageList = camRight.sampleIcons;
for (int c = 0; c < _numSamples; c++)
{
ListViewItem curItem = new ListViewItem();
curItem.ImageIndex = c;
curItem.Text = "Sample" + c.ToString();
this.listViewRight.Items.Add(curItem);
}
Application.Idle += ProcessFrame;
}
void ProcessFrame(object sender, EventArgs e)
{
if (!checkBoxRectify.Checked)
{
if (this.checkBoxCapCB.Checked)
{
imageBoxLeft.Image = camLeft.detectChessboard(true);
imageBoxRight.Image = camRight.detectChessboard(true);
}
else
{
imageBoxLeft.Image = camLeft.captureFrame();
imageBoxRight.Image = camRight.captureFrame();
}
}
else
{
camLeft.captureFrame();
camRight.captureFrame();
Image<Gray, byte> imgLeft = camLeft.lastFrameGray.Clone();
Image<Gray, byte> imgRight = camRight.lastFrameGray.Clone();
CvInvoke.cvRemap(camLeft.lastFrameGray.Ptr, imgLeft.Ptr, _mapxLeft.Ptr, _mapyLeft.Ptr, (int)INTER.CV_INTER_LINEAR | (int)WARP.CV_WARP_FILL_OUTLIERS, new MCvScalar(0));
CvInvoke.cvRemap(camRight.lastFrameGray.Ptr, imgRight.Ptr, _mapxRight.Ptr, _mapyRight.Ptr, (int)INTER.CV_INTER_LINEAR | (int)WARP.CV_WARP_FILL_OUTLIERS, new MCvScalar(0));
imageBoxLeft.Image = imgLeft;
imageBoxRight.Image = imgRight;
}
//checkBoxRectify
}
private void buttonCaptureSample_Click(object sender, EventArgs e)
{
camLeft.captureSample();
camRight.captureSample();
this.listViewLeft.Refresh();
this.listViewRight.Refresh();
}
private void buttonStereoCalibrate_Click(object sender, EventArgs e)
{
// We should have most of the data needed from the sampling with the camData objects
int numCorners = _cbWidth * _cbHeight;
// Calc intrisitcs / camera
_inPramsLeft = new IntrinsicCameraParameters();
_inPramsRight = new IntrinsicCameraParameters();
ExtrinsicCameraParameters[] outExtParamsLeft;
ExtrinsicCameraParameters[] outExtParamsRight;
//Matrix<double> foundamentalMatrix;
//Matrix<double> essentialMatrix;
outExtParamsLeft = new ExtrinsicCameraParameters[_numSamples];
outExtParamsRight = new ExtrinsicCameraParameters[_numSamples];
_outExtParamsStereo = new ExtrinsicCameraParameters();
// Building object points
// These are the points on the chessboard in local 3D coordinates
// Requires one set per sample; if the same calibration object (chessboard) is used for each sample, just use the same set of points for each sample
// Also setting sub-pixel analysis on the samples
MCvPoint3D32f[][] objectPoints = new MCvPoint3D32f[_numSamples][];
for (int sc = 0; sc < _numSamples; sc++) // Samples count
{
// individual cam setup
outExtParamsLeft[sc] = new ExtrinsicCameraParameters();
outExtParamsRight[sc] = new ExtrinsicCameraParameters();
// Sub-pixel analysis
camLeft.samplesGray[sc].FindCornerSubPix(new PointF[][] { camLeft.samplesChessboardCorners[sc] }, new Size(10, 10), new Size(-1, -1), new MCvTermCriteria(300, 0.01));
camRight.samplesGray[sc].FindCornerSubPix(new PointF[][] { camRight.samplesChessboardCorners[sc] }, new Size(10, 10), new Size(-1, -1), new MCvTermCriteria(300, 0.01));
// Object points
objectPoints[sc] = new MCvPoint3D32f[numCorners];
for (int cc = 0; cc < numCorners; cc++) // chessboard corners count
{
objectPoints[sc][cc].x = cc / _cbWidth;
objectPoints[sc][cc].y = cc % _cbWidth;
objectPoints[sc][cc].z = 0.0f;
}
}
Size imageSize = new Size(_imageWidth, _imageHeight);
// Individual camera calibration
CameraCalibration.CalibrateCamera(objectPoints, camLeft.samplesChessboardCorners, imageSize, _inPramsLeft, CALIB_TYPE.DEFAULT, out outExtParamsLeft);
CameraCalibration.CalibrateCamera(objectPoints, camRight.samplesChessboardCorners, imageSize, _inPramsRight, CALIB_TYPE.DEFAULT, out outExtParamsRight);
// Stereo Cam calibration
CameraCalibration.StereoCalibrate(
objectPoints,
camLeft.samplesChessboardCorners,
camRight.samplesChessboardCorners,
_inPramsLeft,
_inPramsRight,
imageSize,
CALIB_TYPE.CV_CALIB_FIX_ASPECT_RATIO | CALIB_TYPE.CV_CALIB_ZERO_TANGENT_DIST | CALIB_TYPE.CV_CALIB_FIX_FOCAL_LENGTH,
new MCvTermCriteria(100, 0.001),
out _outExtParamsStereo,
out _foundamentalMatrix,
out _essentialMatrix
);
PrintIntrinsic(_inPramsLeft);
PrintIntrinsic(_inPramsRight);
}
private void listViewLeft_ItemSelectionChanged(object sender, ListViewItemSelectionChangedEventArgs e)
{
}
private void listViewRight_ItemSelectionChanged(object sender, ListViewItemSelectionChangedEventArgs e)
{
}
private void buttonSaveSamples_Click(object sender, EventArgs e)
{
camLeft.saveSampleImages(textBoxSavePath.Text, "left");
camRight.saveSampleImages(textBoxSavePath.Text, "right");
}
private void buttonLoadSamples_Click(object sender, EventArgs e)
{
camLeft.loadSampleImages(textBoxSavePath.Text, "left");
camRight.loadSampleImages(textBoxSavePath.Text, "right");
this.listViewLeft.Refresh();
this.listViewRight.Refresh();
}
private void buttonCapture_Click(object sender, EventArgs e)
{
}
private void buttonCaptureCurframe_Click(object sender, EventArgs e)
{
camLeft.captureFrame();
camRight.captureFrame();
camLeft.lastFrame.Save(textBoxSavePath.Text + "frameLeft" + ".bmp");
camLeft.lastFrameGray.Save(textBoxSavePath.Text + "frameLeftGray" + ".bmp");
camRight.lastFrame.Save(textBoxSavePath.Text + "frameRight" + ".bmp");
camRight.lastFrameGray.Save(textBoxSavePath.Text + "frameRightGray" + ".bmp");
}
public void StereoRectify(
IntrinsicCameraParameters intrinsicParam1,
IntrinsicCameraParameters intrinsicParam2,
Size imageSize,
ExtrinsicCameraParameters extrinsicParams,
out Matrix<double> R1,
out Matrix<double> R2,
out Matrix<double> P1,
out Matrix<double> P2,
out Matrix<double> Q,
STEREO_RECTIFY_TYPE flags,
double alpha,
Size newImageSize,
ref Rectangle validPixROI1,
ref Rectangle validPixROI2
)
{
R1 = new Matrix<double>(3, 3);
R2 = new Matrix<double>(3, 3);
P1 = new Matrix<double>(3, 4);
P2 = new Matrix<double>(3, 4);
Q = new Matrix<double>(4, 4);
CvInvoke.cvStereoRectify(
_inPramsLeft.IntrinsicMatrix.Ptr,
_inPramsRight.IntrinsicMatrix.Ptr,
_inPramsLeft.DistortionCoeffs.Ptr,
_inPramsRight.DistortionCoeffs.Ptr,
imageSize,
extrinsicParams.RotationVector.Ptr,
extrinsicParams.TranslationVector.Ptr,
R1.Ptr,
R2.Ptr,
P1.Ptr,
P2.Ptr,
Q.Ptr,
STEREO_RECTIFY_TYPE.DEFAULT,
alpha,
newImageSize,
ref validPixROI1,
ref validPixROI2);
}
public void InitUndistortRectifyMap(
IntrinsicCameraParameters intrinsicParam,
Matrix<double> R,
Matrix<double> newCameraMatrix,
out Matrix<float> mapx,
out Matrix<float> mapy
)
{
mapx = new Matrix<float>(new Size(_imageWidth, _imageHeight));
mapy = new Matrix<float>(new Size(_imageWidth, _imageHeight));
CvInvoke.cvInitUndistortRectifyMap(intrinsicParam.IntrinsicMatrix.Ptr, intrinsicParam.DistortionCoeffs.Ptr, R.Ptr, newCameraMatrix.Ptr, mapx.Ptr, mapy.Ptr);
}
private void buttonTestCalc_Click(object sender, EventArgs e)
{
// Stereo Rectify images
Matrix<double> R1;
Matrix<double> R2;
Matrix<double> P1;
Matrix<double> P2;
Matrix<double> Q;
Rectangle validPixROI1, validPixROI2;
validPixROI1 = new Rectangle();
validPixROI2 = new Rectangle();
StereoRectify(_inPramsLeft, _inPramsRight, new Size(_imageWidth, _imageHeight), _outExtParamsStereo, out R1, out R2, out P1, out P2, out Q, 0, 0, new Size(_imageWidth, _imageHeight), ref validPixROI1, ref validPixROI2);
//InitUndistortRectifyMap(_inPramsLeft, R1, P1, out _mapxLeft, out _mapyLeft);
//InitUndistortRectifyMap(_inPramsRight, R2, P2, out _mapxRight, out _mapyRight);
_inPramsLeft.InitUndistortMap(_imageWidth, _imageHeight, out _mapxLeft, out _mapyLeft);
_inPramsRight.InitUndistortMap(_imageWidth, _imageHeight, out _mapxRight, out _mapyRight);
Image<Gray, byte> imgLeft = camLeft.lastFrameGray.Clone();
Image<Gray, byte> imgRight = camRight.lastFrameGray.Clone();
// **** THIS IS WHERE IM UP TO, no errors, it just hangs ****
CvInvoke.cvRemap(camLeft.lastFrameGray.Ptr, imgLeft.Ptr, _mapxLeft.Ptr, _mapyLeft.Ptr, (int)INTER.CV_INTER_LINEAR | (int)WARP.CV_WARP_FILL_OUTLIERS, new MCvScalar(0));
// StereoBM stereoSolver = new StereoBM(Emgu.CV.CvEnum.STEREO_BM_TYPE.BASIC, 0);
//stereoSolver.FindStereoCorrespondence(
}
public void PrintIntrinsic(IntrinsicCameraParameters CamIntrinsic)
{
// Prints the Intrinsic camera parameters to the command line
Console.WriteLine("Intrinsic Matrix:");
string outStr = "";
int i = 0;
int j = 0;
for (i = 0; i < CamIntrinsic.IntrinsicMatrix.Height; i++)
{
for (j = 0; j < CamIntrinsic.IntrinsicMatrix.Width; j++)
{
outStr = outStr + CamIntrinsic.IntrinsicMatrix.Data[i, j].ToString();
outStr = outStr + " ";
}
Console.WriteLine(outStr);
outStr = "";
}
Console.WriteLine("Distortion Coefficients: ");
outStr = "";
for (j = 0; j < CamIntrinsic.DistortionCoeffs.Height; j++)
{
outStr = outStr + CamIntrinsic.DistortionCoeffs.Data[j, 0].ToString();
outStr = outStr + " ";
}
Console.WriteLine(outStr);
}
public void PrintExtrinsic(ExtrinsicCameraParameters CamExtrinsic)
{
// Prints the Extrinsic camera parameters to the command line
Console.WriteLine("Extrinsic Matrix:");
string outStr = "";
int i = 0;
int j = 0;
for (i = 0; i < CamExtrinsic.ExtrinsicMatrix.Height; i++)
{
for (j = 0; j < CamExtrinsic.ExtrinsicMatrix.Width; j++)
{
outStr = outStr + CamExtrinsic.ExtrinsicMatrix.Data[i, j].ToString();
outStr = outStr + " ";
}
Console.WriteLine(outStr);
outStr = "";
}
Console.WriteLine("Rotation Vector: ");
outStr = "";
for (i = 0; i < CamExtrinsic.RotationVector.Height; i++)
{
for (j = 0; j < CamExtrinsic.RotationVector.Width; j++)
{
outStr = outStr + CamExtrinsic.RotationVector.Data[i, j].ToString();
outStr = outStr + " ";
}
Console.WriteLine(outStr);
outStr = "";
}
Console.WriteLine("Translation Vector: ");
outStr = "";
for (i = 0; i < CamExtrinsic.TranslationVector.Height; i++)
{
for (j = 0; j < CamExtrinsic.TranslationVector.Width; j++)
{
outStr = outStr + CamExtrinsic.TranslationVector.Data[i, j].ToString();
outStr = outStr + " ";
}
Console.WriteLine(outStr);
outStr = "";
}
}
}
}
Thanks!
Your maps must be images instead of matrices.
Specifically, of "Gray, float" type.
