Currently, I am facing a problem with my EmguCV C# code. I am trying to recognize faces from my database, but it's not working. Once my face is detected, the program crashes and this error shows up:
Additional information: OpenCV: Different sizes of objects.
I tried searching for this error, but I am clueless.
This is my code:
//Action for each element detected
foreach (MCvAvgComp f in facesDetected[0])
{
    t = t + 1;
    result = currentFrame.Copy(f.rect).Convert<Gray, byte>().Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
    //draw the face detected in the 0th (gray) channel with blue color
    currentFrame.Draw(f.rect, new Bgr(Color.Green), 2);
    //Database: select the image rows and pass them to the EigenObjectRecognizer
    //ConnectToDatabase();
    if (Connection.State.Equals(ConnectionState.Open))
    {
        Connection.Close();
        TSTable.Clear();
        ConnectToDatabase();
    }
    //Connection.Open();
    OleDbCommand OledbSelect = new OleDbCommand("Select FaceName, FaceImage From TrainingSet1", Connection);
    OleDbDataReader reader = OledbSelect.ExecuteReader();
    while (reader.Read())
    {
        labels.Add(reader.GetValue(1).ToString());
        trainingImages.Add(gray);
    }
    if (TSTable.Rows.Count != 0)
    {
        //TermCriteria for face recognition, with the number of trained images as maxIteration
        MCvTermCriteria termCrit = new MCvTermCriteria(ContTrain, 0.001);
        //Eigen face recognizer
        EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
            trainingImages.ToArray(), //database face image list
            labels.ToArray(),         //face name list
            3000,
            ref termCrit);
        name = recognizer.Recognize(result);
        //Draw the label for each face detected and recognized
        currentFrame.Draw(name, ref font, new Point(f.rect.X - 2, f.rect.Y - 2), new Bgr(Color.LightGreen));
    }
}
I'm still new to EmguCV and C#, so I don't understand some of the exceptions. Can anyone help me with this?
Once the code breaks, it goes to EigenObjectRecognizer.cs. This is the code where it breaks:
public static float[] EigenDecomposite(Image<Gray, Byte> src, Image<Gray, Single>[] eigenImages, Image<Gray, Single> avg)
{
    return CvInvoke.cvEigenDecomposite(
        src.Ptr,
        Array.ConvertAll<Image<Gray, Single>, IntPtr>(eigenImages, delegate(Image<Gray, Single> img) { return img.Ptr; }),
        avg.Ptr);
}
I guess the problem is in trainingImages. The width and height of each of the eigenImages must be the same as the width and height of the input image.
Your result image has size 100x100, so all your training images should have the same size. Resize them before adding them to your list.
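For illustration, a minimal sketch of the database reading loop with the resize applied (an assumption about your intent; it reuses the labels, trainingImages, and gray variables from your code):

while (reader.Read())
{
    labels.Add(reader.GetValue(0).ToString()); // column 0 is FaceName in your SELECT
    // resize every training face to the same 100x100 size as `result`
    trainingImages.Add(gray.Resize(100, 100, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC));
}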
I reviewed your code. The trouble is still in the sizes.
Replace line 305 with:
result = currentFrame.Copy(f.rect).Convert<Gray, byte>().Resize(gray.Width, gray.Height, Emgu.CV.CvEnum.INTER.CV_INTER_CUBIC);
I need to compare just two faces to tell whether they belong to the same person or not.
I converted this project (Face detection and recognition in runtime) to compare two faces, but the method always returns true.
int ImagesCount = 0;
CascadeClassifier faceDetector = new CascadeClassifier("haarcascade_frontalface_alt.xml");
List<Mat> TrainedFaces = new List<Mat>();
List<int> PersonsLabes = new List<int>();

Mat image1 = img1.ToImage<Gray, byte>().Mat;
Mat image1Temp = img1.ToImage<Bgr, byte>().Mat;
foreach (Rectangle face in faceDetector.DetectMultiScale(image1, 1.2, 10, new Size(50, 50), Size.Empty))
{
    Image<Gray, byte> trainedImage = ImageClass.CropImage(image1.ToBitmap(), face).ToImage<Gray, byte>().Resize(200, 200, Inter.Cubic);
    CvInvoke.EqualizeHist(trainedImage, trainedImage);
    TrainedFaces.Add(trainedImage.Mat);
    PersonsLabes.Add(ImagesCount);
    ImagesCount++;
}

EigenFaceRecognizer recognizer = new EigenFaceRecognizer(ImagesCount, 2000);
recognizer.Train(TrainedFaces.ToArray(), PersonsLabes.ToArray());

Mat image2 = img2.ToImage<Gray, byte>().Mat;
Rectangle[] rect = faceDetector.DetectMultiScale(image2, 1.2, 10, new Size(50, 50), Size.Empty);
if (rect.Length == 1)
{
    Image<Gray, Byte> grayFaceResult = ImageClass.CropImage(image2.ToBitmap(), rect[0]).ToImage<Gray, byte>().Resize(200, 200, Inter.Cubic);
    CvInvoke.EqualizeHist(grayFaceResult, grayFaceResult);
    var result = recognizer.Predict(grayFaceResult);
    if (result.Label != -1 && result.Distance < 2000)
    {
        return true;
    }
}
return false;
Note: The first image may contain more than one picture of the same person, and the second image should always contain one picture of another (or the same) person, but it always gives me 0 (it always returns true, even when I try pictures of two different people). I am using EmguCV 4.3.
I searched a lot but didn't find anything that resolves my problem.
Does anyone see my mistake in this code, or can anyone give me a link to another solution for comparing two faces?
(Note: I am new to this field)
If you can deploy a Python application on your server, you might adopt deepface. It has a verify function, and you should send the base64-encoded images as inputs to it.
Endpoint: http://127.0.0.1:5000/verify
Body:
{
    "model_name": "VGG-Face",
    "img": [
        {
            "img1": "data:image/jpeg;base64,...",
            "img2": "data:image/jpeg;base64,..."
        }
    ]
}
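For reference, a minimal C# sketch of posting that body to the endpoint (assuming the local deepface server above is running; the JSON field names simply mirror the example body, and the base64 payloads stay elided):

using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class DeepFaceVerify
{
    static async Task Main()
    {
        using (var client = new HttpClient())
        {
            // base64 payloads elided, as in the body above
            string body = "{ \"model_name\": \"VGG-Face\", \"img\": [ { \"img1\": \"data:image/jpeg;base64,...\", \"img2\": \"data:image/jpeg;base64,...\" } ] }";
            var content = new StringContent(body, Encoding.UTF8, "application/json");
            HttpResponseMessage response = await client.PostAsync("http://127.0.0.1:5000/verify", content);
            // the service responds with JSON describing whether the faces match
            Console.WriteLine(await response.Content.ReadAsStringAsync());
        }
    }
}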
I'm programming in WPF (C#) using emgucv-windesktop 3.1.0.2282. I'm new to image processing and I want to use the FFT/DFT in my image processing application. Here is my code:
Image<Gray, System.Single> _image = new Image<Gray, System.Single>(
    Util.ImageSourceToBitmap(img1.Source));
UMat DFTimage = new UMat();
UMat Original = new UMat();
CvInvoke.Dft(_image.ToUMat(), DFTimage, Emgu.CV.CvEnum.DxtType.Forward, -1);
CvInvoke.Dft(DFTimage, Original, Emgu.CV.CvEnum.DxtType.Inverse, -1);
img2.Source = Util.BitmapToImageSource(Original.Bitmap);
img3.Source = Util.BitmapToImageSource(DFTimage.Bitmap);
I used a human face, a flower, etc., but what I see in img2 and img3 (they are Image controls) are black, empty images. What is wrong with my code?
UPDATE 1:
Now I'm using this code:
Bitmap bm = Util.ImageSourceToBitmap(img1.Source);
Image<Gray, Single> image = new Image<Gray, Single>(bm);
UMat DFTimage = new UMat(image.Size, Emgu.CV.CvEnum.DepthType.Cv32F, 2);
UMat Original = new UMat(image.Size, Emgu.CV.CvEnum.DepthType.Cv32F, 2);
CvInvoke.Dft(image, DFTimage, Emgu.CV.CvEnum.DxtType.Forward, image.Rows);
CvInvoke.Dft(Original, DFTimage, Emgu.CV.CvEnum.DxtType.Inverse, image.Rows);
Util.BitmapToImageSource(Original.Bitmap);
Now I get a System.Exception when showing my inverse image:
An unhandled exception of type 'System.Exception' occurred in
Emgu.CV.World.dll
Additional information: Unknown color type
Try changing this:
CvInvoke.Dft(Original, DFTimage, Emgu.CV.CvEnum.DxtType.Inverse, image.Rows);
to this:
CvInvoke.Dft(DFTimage, Original, Emgu.CV.CvEnum.DxtType.Inverse, image.Rows);
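As a follow-up note (an assumption on my part, not part of the original answer): OpenCV's inverse DFT does not rescale by default, so the reconstruction can come back outside the displayable intensity range. If your Emgu build exposes DxtType.InvScale (the wrapper for OpenCV's CV_DXT_INV_SCALE), a round trip that divides by the element count would look like this, reusing the image, DFTimage, and Original variables from the update:

// forward transform, then a scaled inverse so values return to the input range
CvInvoke.Dft(image, DFTimage, Emgu.CV.CvEnum.DxtType.Forward, 0);
CvInvoke.Dft(DFTimage, Original, Emgu.CV.CvEnum.DxtType.InvScale, 0);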
I have a Kinect mounted on the ceiling, pointing down at a table (roughly 2 to 3 meters below the Kinect). My objective is to use the Kinect depth stream to locate the objects on the table and get their positions to later send to Unity. For that I am using C# and OpenCV (the Emgu wrapper): I first use Canny to get the edges and then use BoundingRect to create a box around each object.
The result is as follows:
Original:
As you can see, the stationary objects are no problem, but the box around the hands (the first one had a flyer on it) is too big, and sometimes it is even worse. Is there any other way (using OpenCV) of getting the positions of the objects?
The objective is then to send the position and dimensions of each object to Unity (through TCP/IP), so preferably the shapes should be squares or rectangles for easy manipulation in Unity.
The code I have so far is:
Image<Bgr, Byte> grayImage = new Image<Bgr, Byte>("C:\\Users\\Pedro\\Desktop\\imgRva\\KinectSnapshot-03-48-01.png");
Image<Gray, Byte> gray = grayImage.Convert<Gray, Byte>().PyrDown().PyrUp();
CvInvoke.cvShowImage("texto", gray);
CvInvoke.cvWaitKey(0);

Image<Gray, Byte> bin = gray.ThresholdBinary(new Gray(40), new Gray(255));
Image<Gray, Byte> cannyEdges = bin.Canny(300, 300);
CvInvoke.cvShowImage("texto", cannyEdges);
CvInvoke.cvWaitKey(0);

Image<Gray, Byte> canny = new Image<Gray, byte>(cannyEdges.Size);
using (MemStorage storage = new MemStorage())
    for (Contour<Point> contours = cannyEdges.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, storage); contours != null; contours = contours.HNext)
    {
        CvInvoke.cvDrawContours(canny, contours, new MCvScalar(255), new MCvScalar(255), -1, 1, Emgu.CV.CvEnum.LINE_TYPE.EIGHT_CONNECTED, new Point(0, 0));
    }

using (MemStorage store = new MemStorage())
    for (Contour<Point> contours1 = cannyEdges.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store); contours1 != null; contours1 = contours1.HNext)
    {
        Rectangle r = CvInvoke.cvBoundingRect(contours1, 1);
        canny.Draw(r, new Gray(255), 1);
        Debug.WriteLine(r.Location + " x " + r.Width + " y " + r.Height);
    }

CvInvoke.cvShowImage("texto", canny);
CvInvoke.cvWaitKey(0);
Thanks
Hi, I'm using this code for face detection, but now I want to continue with face recognition. I'm stuck here and don't know what the next step is. I'm using Emgu version 2.2.
if (faces.Length > 0)
{
    foreach (var face in faces)
    {
        ImageFrame.Draw(face.rect, new Bgr(Color.Green), 2);
        //Extract the face
        ExtractedFace = new Bitmap(face.rect.Width, face.rect.Height);
        FaceConvas = Graphics.FromImage(ExtractedFace);
        FaceConvas.DrawImage(GrayBmpInput, 0, 0, face.rect, GraphicsUnit.Pixel);
        ExtcFacesArr[faceNo] = ExtractedFace;
        faceNo++;
    }
    faceNo = 0;
    picExtcFaces.Image = ExtcFacesArr[faceNo];
    CamImageBox.Image = ImageFrame;
}
Where should I continue with the face recognition, and do you have any good online references with C# code?
Your code is almost correct, but I think you don't have an idea of what to do next. I am doing face recognition in one of my apps to show a mask on the face. I do it like this:
Image mask = Image.FromFile("mask.png");

public Bitmap getFacedBitmap(Bitmap bbb)
{
    using (Image<Bgr, byte> nextFrame = new Image<Bgr, byte>(bbb))
    {
        if (nextFrame != null)
        {
            // there's only one channel (greyscale), hence the zero index
            //var faces = nextFrame.DetectHaarCascade(haar)[0];
            Image<Gray, byte> grayframe = nextFrame.Convert<Gray, byte>();
            //Image<Gray, Byte> gray = nextFrame.Convert<Gray, Byte>();
            var faces = grayframe.DetectHaarCascade(haar, 1.3, 2, HAAR_DETECTION_TYPE.SCALE_IMAGE, new Size(nextFrame.Width / 8, nextFrame.Height / 8))[0];
            if (faces.Length > 0)
            {
                foreach (var face in faces)
                {
                    //ImageFrame.Draw(face.rect, new Bgr(Color.Green), 2);
                    using (Graphics g = Graphics.FromImage(bbb))
                    {
                        // draw the mask image over the detected face rectangle
                        g.DrawImage(mask, face.rect);
                        g.Save();
                    }
                }
            }
        }
    }
    return bbb;
}
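For the recognition step the question actually asks about, here is a minimal sketch using the EigenObjectRecognizer API available in Emgu 2.2 (a hypothetical continuation: trainingImages, labels, and detectedFace are assumed variables, not part of the code above):

// trainingImages: List<Image<Gray, byte>> of equally sized faces gathered beforehand
// labels: List<string> of matching person names
MCvTermCriteria termCrit = new MCvTermCriteria(trainingImages.Count, 0.001);
EigenObjectRecognizer recognizer = new EigenObjectRecognizer(
    trainingImages.ToArray(),
    labels.ToArray(),
    3000,        // eigen distance threshold; tune for your data
    ref termCrit);
// detectedFace is one Image<Gray, byte>, resized to the same size as the training faces
string name = recognizer.Recognize(detectedFace);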
How can I fill the holes in a binary image in Emgu CV?
In AForge.NET it's easy: use the FillHoles class.
Though the question is a little old, I'd like to contribute an alternative solution to the problem.
You can obtain the same result as Chris's, without the memory problem, if you use the following:
private Image<Gray, byte> FillHoles(Image<Gray, byte> image)
{
    var resultImage = image.CopyBlank();
    Gray gray = new Gray(255);
    using (var mem = new MemStorage())
    {
        for (var contour = image.FindContours(
                 CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
                 RETR_TYPE.CV_RETR_CCOMP,
                 mem); contour != null; contour = contour.HNext)
        {
            // thickness -1 fills the contour interior
            resultImage.Draw(contour, gray, -1);
        }
    }
    return resultImage;
}
The good thing about the method above is that you can selectively fill only the holes that meet your criteria. For example, you may want to fill only holes whose pixel count (the number of black pixels inside the blob) is below 50, etc.:
private Image<Gray, byte> FillHoles(Image<Gray, byte> image, int minArea, int maxArea)
{
    var resultImage = image.CopyBlank();
    Gray gray = new Gray(255);
    using (var mem = new MemStorage())
    {
        for (var contour = image.FindContours(
                 CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
                 RETR_TYPE.CV_RETR_CCOMP,
                 mem); contour != null; contour = contour.HNext)
        {
            // fill only contours whose area falls inside the requested range
            if ((contour.Area < maxArea) && (contour.Area > minArea))
                resultImage.Draw(contour, gray, -1);
        }
    }
    return resultImage;
}
Yes, there is a method, but it's a bit messy as it's based on the cvFloodFill operation. All this algorithm is designed to do is fill an area with a colour until it reaches an edge, similar to a region-growing algorithm. To use it effectively you need a little inventive coding, but I warn you: this code is only to get you started, and it may require refactoring to speed things up. As it stands, the loop goes through each of your pixels that is less than 255, applies cvFloodFill, checks the size of the filled area, and then, if the area is under a certain size, fills it in.
It is important to note that a copy of the original image is supplied to the cvFloodFill operation, because a pointer is used. If the original image is supplied directly, you will end up with a white image.
OpenFileDialog OpenFile = new OpenFileDialog();
if (OpenFile.ShowDialog() == DialogResult.OK)
{
    Image<Bgr, byte> image = new Image<Bgr, byte>(OpenFile.FileName);
    for (int i = 0; i < image.Width; i++)
    {
        for (int j = 0; j < image.Height; j++)
        {
            if (image.Data[j, i, 0] != 255)
            {
                // flood fill a copy: cvFloodFill works through a pointer and
                // would otherwise modify the original image directly
                Image<Bgr, byte> image_copy = image.Copy();
                Image<Gray, byte> mask = new Image<Gray, byte>(image.Width + 2, image.Height + 2);
                MCvConnectedComp comp = new MCvConnectedComp();
                Point point1 = new Point(i, j);
                CvInvoke.cvFloodFill(image_copy.Ptr, point1, new MCvScalar(255, 255, 255, 255),
                    new MCvScalar(0, 0, 0),
                    new MCvScalar(0, 0, 0), out comp,
                    Emgu.CV.CvEnum.CONNECTIVITY.EIGHT_CONNECTED,
                    Emgu.CV.CvEnum.FLOODFILL_FLAG.DEFAULT, mask.Ptr);
                // keep the fill only if the region was small enough to be a hole
                if (comp.area < 10000)
                {
                    image = image_copy.Copy();
                }
            }
        }
    }
}
The "new MCvScalar(0, 0, 0), new MCvScalar(0, 0, 0)," are not really important in this case as you are only filling in results of a binary image. YOu could play around with other settings to see what results you can achieve. "if (comp.area < 10000)" is the key constant to change is you want to change what size hole the method will fill.
These are the results that you can expect:
Original
Results
The problem with this method is that it's extremely memory intensive: it managed to eat up 6 GB of RAM on a 200x200 image, and when I tried 200x300 it ate all 8 GB of my RAM and brought everything to a crashing halt. Unless the majority of your image is white and you only want to fill in tiny gaps, or you can minimise where you apply the method, I would avoid it. I would suggest writing your own class that examines each pixel that is not 255 and counts the pixels surrounding it. You can then record the position of each pixel that was not 255 (in a simple list) and, if your count is below a threshold, set those positions to 255 in your image (by iterating through the list).
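As a rough illustration of that suggestion (my own sketch, not Chris's code, and the helper name is hypothetical), here is a managed breadth-first flood fill that counts each dark region and whitens it when the count is below a threshold:

using System.Collections.Generic;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.Structure;

static void FillSmallHoles(Image<Gray, byte> img, int maxHoleSize)
{
    bool[,] visited = new bool[img.Height, img.Width];
    for (int y = 0; y < img.Height; y++)
    {
        for (int x = 0; x < img.Width; x++)
        {
            if (visited[y, x] || img.Data[y, x, 0] == 255)
                continue;
            // flood fill one dark region, remembering every pixel in it
            var region = new List<Point>();
            var queue = new Queue<Point>();
            queue.Enqueue(new Point(x, y));
            visited[y, x] = true;
            while (queue.Count > 0)
            {
                Point p = queue.Dequeue();
                region.Add(p);
                foreach (Point n in new[] {
                    new Point(p.X + 1, p.Y), new Point(p.X - 1, p.Y),
                    new Point(p.X, p.Y + 1), new Point(p.X, p.Y - 1) })
                {
                    if (n.X < 0 || n.Y < 0 || n.X >= img.Width || n.Y >= img.Height)
                        continue;
                    if (visited[n.Y, n.X] || img.Data[n.Y, n.X, 0] == 255)
                        continue;
                    visited[n.Y, n.X] = true;
                    queue.Enqueue(n);
                }
            }
            // whiten the region only if it is small enough to count as a hole
            if (region.Count < maxHoleSize)
                foreach (Point p in region)
                    img.Data[p.Y, p.X, 0] = 255;
        }
    }
}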
I would stick with the AForge FillHoles class if you do not wish to write your own, as it is designed for this purpose.
Cheers
Chris
You can use FillConvexPoly:
image.FillConvexPoly(externalContours.ToArray(), new Gray(255));
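For context, a minimal sketch of where externalContours might come from (an assumption; the answer doesn't show it), using the same legacy contour API as the FillHoles method above:

using (MemStorage mem = new MemStorage())
{
    for (var contour = image.FindContours(
             CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
             RETR_TYPE.CV_RETR_EXTERNAL, // outer contours only
             mem); contour != null; contour = contour.HNext)
    {
        // fills the polygon spanned by the contour's points
        image.FillConvexPoly(contour.ToArray(), new Gray(255));
    }
}

Note that FillConvexPoly assumes the polygon is convex; for concave outlines, drawing the contour with thickness -1 (as in the FillHoles method above) is the safer choice.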