Using Seq in Emgu CV in C#

I have a self-defined struct for a point (as opposed to System.Drawing.Point):
struct PointD
{
    public double X, Y;

    // constructor used by the Push calls below
    public PointD(double x, double y)
    {
        X = x;
        Y = y;
    }
}
I want to get a Seq of points and, from there, extract the minimum-area rectangle:
using (MemStorage stor = new MemStorage())
{
    // CV_MAKETYPE(6, 2) = CV_64FC2, i.e. two doubles per element
    Seq<PointD> seq = new Seq<PointD>(CvInvoke.CV_MAKETYPE(6, 2), stor);
    seq.Push(new PointD(0.5, 0));
    seq.Push(new PointD(1.0, 0));
    seq.Push(new PointD(0, 1.0));
    seq.Push(new PointD(1.0, 1.0));
    var output = seq.GetMinAreaRect();
}
However, this code throws an exception at GetMinAreaRect(), saying that the input sequence must consist of 2D points. Is there a way to get my data through in the correct format? I suspect I am just missing something small, since this code works with System.Drawing.Point.

I assume the following would work:
PointF[] pts =
{
    new PointF(0.5f, 0),
    new PointF(1.0f, 0),
    new PointF(0, 1.0f),
    new PointF(1.0f, 1.0f)
};
MCvBox2D box = PointCollection.MinAreaRect(pts);
Additional example: http://www.emgu.com/wiki/index.php/Minimum_Area_Rectangle_in_CSharp
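If your data starts out in the custom PointD struct, a minimal conversion sketch (assuming the same Emgu CV 2.x PointCollection API used above) would be:
using System.Drawing;
using System.Linq;

// Convert the custom PointD values to PointF (at the cost of double-to-float
// precision) and let PointCollection.MinAreaRect do the work.
PointD[] myPoints =
{
    new PointD(0.5, 0), new PointD(1.0, 0),
    new PointD(0, 1.0), new PointD(1.0, 1.0)
};
PointF[] pts = myPoints.Select(p => new PointF((float)p.X, (float)p.Y)).ToArray();
MCvBox2D box = PointCollection.MinAreaRect(pts);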

Related

Is there any way to compare two faces using emguCV in C#?

I need to compare just two faces to tell whether they belong to the same person or not.
I converted this project, Face detection and recognition in runtime, to compare two faces, but the method always returns true.
int ImagesCount = 0;
CascadeClassifier faceDetector = new CascadeClassifier("haarcascade_frontalface_alt.xml");
List<Mat> TrainedFaces = new List<Mat>();
List<int> PersonsLabes = new List<int>();

Mat image1 = img1.ToImage<Gray, byte>().Mat;
Mat image1Temp = img1.ToImage<Bgr, byte>().Mat;

// train the recognizer on every face found in the first image
foreach (Rectangle face in faceDetector.DetectMultiScale(image1, 1.2, 10, new Size(50, 50), Size.Empty))
{
    Image<Gray, byte> trainedImage = ImageClass.CropImage(image1.ToBitmap(), face).ToImage<Gray, byte>().Resize(200, 200, Inter.Cubic);
    CvInvoke.EqualizeHist(trainedImage, trainedImage);
    TrainedFaces.Add(trainedImage.Mat);
    PersonsLabes.Add(ImagesCount);
    ImagesCount++;
}

EigenFaceRecognizer recognizer = new EigenFaceRecognizer(ImagesCount, 2000);
recognizer.Train(TrainedFaces.ToArray(), PersonsLabes.ToArray());

// detect the single face in the second image and predict against the trained set
Mat image2 = img2.ToImage<Gray, byte>().Mat;
Rectangle[] rect = faceDetector.DetectMultiScale(image2, 1.2, 10, new Size(50, 50), Size.Empty);
if (rect.Length == 1)
{
    Image<Gray, Byte> grayFaceResult = ImageClass.CropImage(image2.ToBitmap(), rect[0]).ToImage<Gray, byte>().Resize(200, 200, Inter.Cubic);
    CvInvoke.EqualizeHist(grayFaceResult, grayFaceResult);
    var result = recognizer.Predict(grayFaceResult);
    if (result.Label != -1 && result.Distance < 2000)
    {
        return true;
    }
}
return false;
Note: the first image may contain more than one picture of the same person, and the second image should always contain one picture of another (or the same) person, but it always gives me label 0 (it always returns true, even when I try pictures of two different people). I am using Emgu CV 4.3.
I searched a lot but didn't find anything that resolves my problem.
Does anyone know what my mistake is in this code, or can you give me a link to another solution for comparing two faces?
(Note: I am new to this field.)
If you can deploy a Python application on your server, you might adopt deepface. It has a verify function, and you should send the base64-encoded images as inputs:
Endpoint: http://127.0.0.1:5000/verify
Body:
{
    "model_name": "VGG-Face",
    "img": [
        {
            "img1": "data:image/jpeg;base64,...",
            "img2": "data:image/jpeg;base64,..."
        }
    ]
}
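Calling that endpoint from C# could look roughly like this (a sketch; VerifyFacesAsync is a hypothetical helper, img1 and img2 must already be "data:image/jpeg;base64,..." strings, and the exact response JSON depends on the deepface version):
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

static async Task<string> VerifyFacesAsync(string img1, string img2)
{
    // Body mirrors the JSON shown above.
    string payload =
        "{ \"model_name\": \"VGG-Face\", \"img\": [ { \"img1\": \"" + img1 +
        "\", \"img2\": \"" + img2 + "\" } ] }";

    using (var client = new HttpClient())
    {
        var response = await client.PostAsync(
            "http://127.0.0.1:5000/verify",
            new StringContent(payload, Encoding.UTF8, "application/json"));
        // Parse the returned JSON with your preferred library and read the
        // verification result it reports.
        return await response.Content.ReadAsStringAsync();
    }
}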

C# Text-extracting with OpenCV finds only one line in an image

As a result, only the last line was boxed.
The code is based on this post: Extracting text OpenCV.
I started from Diomedes Domínguez's C# code and modified only one line, because the original code throws here:
var mask = new Mat(Mat.Zeros(bw.Size(), MatType.CV_8UC1));
System.ArgumentException: 'empty ranges parameter: ranges'
So I added a range:
var mask = new Mat(Mat.Zeros(bw.Size(), MatType.CV_8UC1), new Rect(0, 0, bw.Size().Width, bw.Size().Height));
So here is the code I've tried:
Mat large = new Mat(@"D:\cap_price.PNG");
Mat rgb = new Mat(), small = new Mat(), grad = new Mat(), bw = new Mat(), connected = new Mat();

// downsample and use it for processing
Cv2.PyrDown(large, rgb);
Cv2.CvtColor(rgb, small, ColorConversionCodes.BGR2GRAY);

// morphological gradient
var morphKernel = Cv2.GetStructuringElement(MorphShapes.Ellipse, new OpenCvSharp.Size(3, 3));
Cv2.MorphologyEx(small, grad, MorphTypes.Gradient, morphKernel);

// binarize
Cv2.Threshold(grad, bw, 0, 255, ThresholdTypes.Binary | ThresholdTypes.Otsu);

// connect horizontally oriented regions
morphKernel = Cv2.GetStructuringElement(MorphShapes.Rect, new OpenCvSharp.Size(9, 1));
Cv2.MorphologyEx(bw, connected, MorphTypes.Close, morphKernel);

// find contours
var mask = new Mat(Mat.Zeros(bw.Size(), MatType.CV_8UC1), new Rect(0, 0, bw.Size().Width, bw.Size().Height));
Cv2.FindContours(connected, out OpenCvSharp.Point[][] contours, out HierarchyIndex[] hierarchy, RetrievalModes.CComp, ContourApproximationModes.ApproxSimple, new OpenCvSharp.Point(0, 0));

// filter contours
var idx = 0;
foreach (var hierarchyItem in hierarchy)
{
    OpenCvSharp.Rect rect = Cv2.BoundingRect(contours[idx]);
    var maskROI = new Mat(mask, rect);
    maskROI.SetTo(new Scalar(0, 0, 0));

    // fill the contour
    Cv2.DrawContours(mask, contours, idx, Scalar.White, -1);

    // ratio of non-zero pixels in the filled region
    double r = (double)Cv2.CountNonZero(maskROI) / (rect.Width * rect.Height);
    if (r > .45 /* assume at least 45% of the area is filled if it contains text */
        &&
        (rect.Height > 8 && rect.Width > 8) /* constraints on region size */
        /* these two conditions alone are not very robust. better to use something
           like the number of significant peaks in a horizontal projection as a third condition */
       )
    {
        Cv2.Rectangle(rgb, rect, new Scalar(0, 255, 0), 1);
    }
}

//rgb.SaveImage(Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "rgb.jpg"));
rgb.SaveImage(@"D:\rgb.jpg");
I want each text line to be boxed.
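As a side note on the comment embedded in the code above: a crude version of the suggested third condition, a horizontal-projection check, could look like this (a sketch, assuming OpenCvSharp; the thresholds are illustrative):
// Counts rows of the filled mask region whose ink density exceeds a
// threshold; regions containing text tend to show several such bands.
static bool HasTextLikeProfile(Mat maskROI, double rowFillThreshold = 0.25, int minInkRows = 3)
{
    int inkRows = 0;
    for (int y = 0; y < maskROI.Rows; y++)
    {
        // ratio of non-zero pixels in this row
        double fill = (double)Cv2.CountNonZero(maskROI.Row(y)) / maskROI.Cols;
        if (fill > rowFillThreshold)
            inkRows++;
    }
    return inkRows >= minInkRows;
}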

Calculate area of polygon having WGS coordinates using DotSpatial?

Calculating area:
var coordinates = new List<Coordinate> {
    new Coordinate(55, 35),
    new Coordinate(55, 35.1),
    new Coordinate(55.1, 35.1),
    new Coordinate(55.1, 35),
};
Console.WriteLine(new Polygon(coordinates).Area); // ~0.01
The calculation is correct for a planar (orthogonal) coordinate system.
But how do I indicate that the coordinates are in WGS84?
It turns out the task is more complicated than I expected. I found a useful discussion about this on Google Groups.
First, we need to find the projection system best suited to the region where the area is computed. For example, you can take one of the UTM zones:
using System.Linq;
using DotSpatial.Projections;
using DotSpatial.Topology;

public static double CalculateArea(IEnumerable<double> latLonPoints)
{
    // source projection is WGS1984
    var projFrom = KnownCoordinateSystems.Geographic.World.WGS1984;
    // the most complicated problem - you have to find the most suitable projection
    var projTo = KnownCoordinateSystems.Projected.UtmWgs1984.WGS1984UTMZone37N;
    // prepare for ReprojectPoints (it mutates the array)
    var z = new double[latLonPoints.Count() / 2];
    var pointsArray = latLonPoints.ToArray();
    Reproject.ReprojectPoints(pointsArray, z, projFrom, projTo, 0, pointsArray.Length / 2);
    // assembling a new points array to create the polygon
    var points = new List<Coordinate>(pointsArray.Length / 2);
    for (int i = 0; i < pointsArray.Length / 2; i++)
        points.Add(new Coordinate(pointsArray[i * 2], pointsArray[i * 2 + 1]));
    var poly = new Polygon(points);
    return poly.Area;
}
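Usage with the square from the question might look like this (a sketch; note that, despite the parameter name, ReprojectPoints treats the array as interleaved x,y pairs, so pass the values in the same x,y order as the Coordinate constructor above):
// The result is in square meters because UTM coordinates are in meters;
// accuracy depends on how well the chosen UTM zone fits the input region.
var points = new double[] { 55, 35, 55, 35.1, 55.1, 35.1, 55.1, 35 };
double areaInSquareMeters = CalculateArea(points);
Console.WriteLine(areaInSquareMeters);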
You can get the area directly from IGeometry or from Feature.Geometry. Also, you need to repeat the first coordinate to close the polygon.
FeatureSet fs = new FeatureSet(FeatureType.Polygon);
Coordinate[] coord = new Coordinate[]
{
    new Coordinate(55, 35),
    new Coordinate(55, 35.1),
    new Coordinate(55.1, 35.1),
    new Coordinate(55.1, 35),
    new Coordinate(55, 35)
};
fs.AddFeature(new Polygon(new LinearRing(coord)));
var area = fs.Features.First().Geometry.Area;

SURF features in EmguCv: how to extract a fixed number of features

I want to train a neural network in order to classify different classes of grayscale images.
As input to this network, I want to use the features extracted by the SURF-128 algorithm. The following code (a simplification of the example provided with the Emgu CV library) shows how I use the API:
SURFDetector surfCPU = new SURFDetector(500, true);
VectorOfKeyPoint observedKeyPoints;
BriefDescriptorExtractor descriptor = new BriefDescriptorExtractor();
observedKeyPoints = surfCPU.DetectKeyPointsRaw(img, null);
Matrix<Byte> observedDescriptors = descriptor.ComputeDescriptorsRaw(img, null, observedKeyPoints);
By using the following code:
observedDescriptors.Save(@"SURF.bmp");
I can save some results; they show that the above code extracts feature sets of different sizes from different images.
What I want is to obtain a vector with a fixed size.
How can I transform a generic grayscale image into a 128-dimensional array, using the API provided by the Emgu CV library for C#?
Problem solved.
To obtain a 128-dimensional array that describes a grayscale image, with the features computed at a fixed key point (e.g., the center of the image), I used the following code:
SURFDetector surfCPU = new SURFDetector(400, true);
float x = 30, y = 50; // key point position
float kpSize = 20;    // key point size
MKeyPoint[] keyPoints = new MKeyPoint[1];
keyPoints[0] = newMKeyPoint(x, y, kpSize); // this helper method is written below
ImageFeature<float>[] features = surfCPU.ComputeDescriptors<float>(img, null, keyPoints);
float[] array_of_128_elements = features[0].Descriptor;

private static MKeyPoint newMKeyPoint(float x, float y, float size)
{
    MKeyPoint res = new MKeyPoint();
    res.Size = size;
    res.Point = new PointF(x, y);
    //res.Octave = 0;
    //res.Angle = -1;
    //res.Response = 0;
    //res.ClassId = -1;
    return res;
}
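If a single key point is not descriptive enough, the same helper can be reused to sample several fixed positions and concatenate the descriptors into one fixed-length vector (a sketch; the grid positions and key point size below are illustrative assumptions):
using System.Linq;

// Descriptors at a fixed 2x2 grid of key points, concatenated into a
// 512-element vector (4 key points x 128 floats each).
MKeyPoint[] grid =
{
    newMKeyPoint(50, 50, 20),
    newMKeyPoint(150, 50, 20),
    newMKeyPoint(50, 150, 20),
    newMKeyPoint(150, 150, 20)
};
ImageFeature<float>[] gridFeatures = surfCPU.ComputeDescriptors<float>(img, null, grid);
float[] fixedLengthVector = gridFeatures.SelectMany(f => f.Descriptor).ToArray();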

WPF 3D - Apply a transformation, change the underlying object values

I've got a method that transforms a number of cylinders. If I run the method a second time, it transforms the cylinders from their original position rather than from their new position.
Is there any way of 'applying' the transformation so that it changes the underlying values of the cylinders, letting me re-transform from the new values?
Can anyone assist?
Cheers,
Andy
void TransformCylinders(double angle)
{
    var rotateTransform3D = new RotateTransform3D { CenterX = 0, CenterY = 0, CenterZ = 0 };
    var axisAngleRotation3D = new AxisAngleRotation3D { Axis = new Vector3D(1, 1, 1), Angle = angle };
    rotateTransform3D.Rotation = axisAngleRotation3D;

    var myTransform3DGroup = new Transform3DGroup();
    myTransform3DGroup.Children.Add(rotateTransform3D);
    _cylinders.ForEach(x => x.Transform = myTransform3DGroup);
}
You are remaking the Transform3DGroup every time the method is called:
var myTransform3DGroup = new Transform3DGroup();
Transforms are essentially a stack of matrices that get multiplied together. You are clearing that stack every time you make a new group. You need to add consecutive transforms to the existing group rather than remake it.
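A minimal sketch of that change (assuming the group is kept in a field; the field name is illustrative):
// Keep one group alive across calls and append each new rotation to it,
// so successive transforms compose instead of starting over.
private readonly Transform3DGroup _transformGroup = new Transform3DGroup();

void TransformCylinders(double angle)
{
    var rotateTransform3D = new RotateTransform3D
    {
        CenterX = 0,
        CenterY = 0,
        CenterZ = 0,
        Rotation = new AxisAngleRotation3D { Axis = new Vector3D(1, 1, 1), Angle = angle }
    };

    // Append rather than replace: the group accumulates the rotations.
    _transformGroup.Children.Add(rotateTransform3D);
    _cylinders.ForEach(x => x.Transform = _transformGroup);
}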
