SkiaSharp: calculate new point coordinates after applying a 3D rotation - C#

I am using a matrix to translate and then rotate in 3D (x, y, z) using the xRotate, yRotate, zRotate and depth (== 300) variables.
using (var bmp = new SKBitmap(800, 600))
using (var canvas = new SKCanvas(bmp))
using (var paint = new SKPaint())
{
    canvas.Clear(SKColors.White);
    paint.IsAntialias = true;

    // Find center of canvas
    var info = bmp.Info;
    float xCenter = info.Width / 2;
    float yCenter = info.Height / 2;

    // Translate center to origin
    SKMatrix matrix = SKMatrix.MakeTranslation(-xCenter, -yCenter);

    // Use 3D matrix for 3D rotations and perspective
    SKMatrix44 matrix44 = SKMatrix44.CreateIdentity();
    matrix44.PostConcat(SKMatrix44.CreateRotationDegrees(1, 0, 0, xRotate));
    matrix44.PostConcat(SKMatrix44.CreateRotationDegrees(0, 1, 0, yRotate));
    matrix44.PostConcat(SKMatrix44.CreateRotationDegrees(0, 0, 1, zRotate));

    SKMatrix44 perspectiveMatrix = SKMatrix44.CreateIdentity();
    perspectiveMatrix[3, 2] = -1 / depth;
    matrix44.PostConcat(perspectiveMatrix);

    // Concatenate with 2D matrix
    SKMatrix.PostConcat(ref matrix, matrix44.Matrix);

    // Translate back to center
    SKMatrix.PostConcat(ref matrix, SKMatrix.MakeTranslation(xCenter, yCenter));

    // Set the matrix and display the bitmap
    canvas.SetMatrix(matrix);
    canvas.DrawBitmap(currentImage, 50, 25, paint);

    pictureBox1.Image = bmp.ToBitmap();
}
If I have some Point in the original currentImage, I want to calculate its new location after drawing the transformed image. How can I do that? Would I reuse the matrix to calculate it?

Found the answer. Let the point be (1, 2) in the currentImage. Then simply:
var newPoint = matrix.MapPoint(1, 2);
newPoint = new SKPoint(50 + newPoint.X, 25 + newPoint.Y); // + offsets of DrawBitmap
Or, to draw on a canvas whose matrix has already been set with canvas.SetMatrix:
var newPoint = new SKPoint(1, 2);
canvas.DrawCircle(newPoint.X + 50, newPoint.Y + 25, 7, paint); // + offsets of DrawBitmap
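Alternatively, a small sketch (assuming the same 50, 25 offset passed to DrawBitmap, and that your SkiaSharp version exposes SKMatrix.PreConcat alongside the SKMatrix.PostConcat used above): pre-concatenate the offset into the matrix so MapPoint returns final canvas coordinates directly.
// Sketch: fold the DrawBitmap offset (50, 25) into the transform itself.
SKMatrix full = matrix; // the matrix built above
SKMatrix.PreConcat(ref full, SKMatrix.MakeTranslation(50, 25)); // the offset is applied first, then the 3D transform
var mapped = full.MapPoint(1, 2); // already in final canvas coordinates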

Related

Properly set affine matrix and draw it in SkiaSharp

I am trying to scale and skew a bitmap in SkiaSharp with an affine matrix; however, the results always cut off part of the resulting bitmap. I am also not sure whether my affine matrix has the correct values.
Here is a diagram of what I am trying to accomplish: on the left is the original image, with a bitmap size of (178x242). On the right is the scaled and skewed image. Its bounding box is (273x366), and I also know that the x axis has been skewed by -10 pixels and the y axis by 7 pixels.
Here is my code for applying the affine matrix:
public SKBitmap ApplyAffine(SKBitmap origBitmap, SKSizeI newSize, SKPointI xyRotation)
{
    var skewX = 1f / xyRotation.X;
    var skewY = 1f / xyRotation.Y;

    // Scale transform
    var scaleX = (newSize.Width / (float)origBitmap.Width);
    var scaleY = (newSize.Height / (float)origBitmap.Height);

    // Affine transform
    SKMatrix affine = new SKMatrix
    {
        ScaleX = scaleX,
        SkewY = skewY,
        SkewX = skewX,
        ScaleY = scaleY,
        TransX = 0,
        TransY = 0,
        Persp2 = 1
    };

    var bitmap = origBitmap.Copy();
    var newBitmap = new SKBitmap(newSize.Width, newSize.Height);
    using (var canvas = new SKCanvas(newBitmap))
    {
        canvas.SetMatrix(affine);
        canvas.DrawBitmap(bitmap, 0, 0);
        canvas.Restore();
    }
    return newBitmap;
}
The resulting bitmap has the left side cut off. It also appears that it is not translated correctly. How do I apply this affine transform properly?
If I understood you right and the xyRotation is what I think it is from your description, then I think you were pretty close to the solution :)
public SKBitmap ApplyAffine(SKBitmap origBitmap, SKSizeI newSize, SKPointI xyRotation)
{
    // mcoo: skew is the tangent of the skew angle, but since xyRotation is not normalized
    // it should be calculated based on the original width/height
    var skewX = (float)xyRotation.X / origBitmap.Height;
    var skewY = (float)xyRotation.Y / origBitmap.Width;

    // Scale transform
    // mcoo (edit): we need to account here for the fact that the given skew is known AFTER the scale is applied
    var scaleX = (float)(newSize.Width - Math.Abs(xyRotation.X)) / origBitmap.Width;
    var scaleY = (float)(newSize.Height - Math.Abs(xyRotation.Y)) / origBitmap.Height;

    // Affine transform
    SKMatrix affine = new SKMatrix
    {
        ScaleX = scaleX,
        SkewY = skewY,
        SkewX = skewX,
        ScaleY = scaleY,
        // mcoo: we need to account for negative skew moving image bounds towards negative coords
        TransX = Math.Max(0, -xyRotation.X),
        TransY = Math.Max(0, -xyRotation.Y),
        Persp2 = 1
    };

    var bitmap = origBitmap.Copy();
    var newBitmap = new SKBitmap(newSize.Width, newSize.Height);
    using (var canvas = new SKCanvas(newBitmap))
    {
        // canvas.Clear(SKColors.Red);
        canvas.SetMatrix(affine);
        canvas.DrawBitmap(bitmap, 0, 0);
    }
    return newBitmap;
}
Now calling ApplyAffine(skBitmap, new SKSizeI(273, 366), new SKPointI(-10, 7)) on an image of size 178x242 yields a roughly correct result (red background added for reference):

Rotate and Scale rectangle as per user control

I have a UserControl of size 300*200,
and a rectangle of size 300*200:
graphics.DrawRectangle(Pens.Black, 0, 0, 300, 200);
When I rotate the rectangle in the UserControl by 30 degrees, I get the rotated rectangle, but it extends outside the control.
PointF center = new PointF(150, 100);
graphics.FillRectangle(Brushes.Black, center.X, center.Y, 2, 2); // draw center point
using (Matrix matrix = new Matrix())
{
    matrix.RotateAt(30, center);
    graphics.Transform = matrix;
    graphics.DrawRectangle(Pens.Black, 0, 0, 300, 200);
    graphics.ResetTransform();
}
I want the rotated rectangle to fit like the actual result (check the image here).
Does anyone have a solution for this?
Thanks.
It's more of a math question than a programming one.
Calculate the bounding box of a rectangle rotated by any angle (in radians):
var newWidth = Math.Abs(height * Math.Sin(angle)) + Math.Abs(width * Math.Cos(angle));
var newHeight = Math.Abs(width * Math.Sin(angle)) + Math.Abs(height * Math.Cos(angle));
Calculate the scale for x and y:
scaleX = width / newWidth;
scaleY = height / newHeight;
Apply it to your rectangle.
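For your 300*200 rectangle rotated by 30 degrees, this gives newWidth ≈ 200*0.5 + 300*0.866 ≈ 359.8 and newHeight ≈ 300*0.5 + 200*0.866 ≈ 323.2, so scaleX ≈ 300/359.8 ≈ 0.83 and scaleY ≈ 200/323.2 ≈ 0.62.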
EDIT:
Applied to your example:
PointF center = new PointF(150, 100);
graphics.FillRectangle(Brushes.Black, center.X, center.Y, 2, 2); // draw center point

var height = 200;
var width = 300;
var angle = 30;
var radians = angle * Math.PI / 180;

var boundingWidth = Math.Abs(height * Math.Sin(radians)) + Math.Abs(width * Math.Cos(radians));
var boundingHeight = Math.Abs(width * Math.Sin(radians)) + Math.Abs(height * Math.Cos(radians));
var scaleX = (float)(width / boundingWidth);
var scaleY = (float)(height / boundingHeight);

using (Matrix matrix = new Matrix())
{
    matrix.Scale(scaleX, scaleY, MatrixOrder.Append);
    matrix.Translate(((float)boundingWidth - width) / 2, ((float)boundingHeight - height) / 2);
    matrix.RotateAt(angle, center);
    graphics.Transform = matrix;
    graphics.DrawRectangle(Pens.Black, 0, 0, width, height);
    graphics.ResetTransform();
}

EMGU CV real time Eye tracking using C#

I am following Luca Del Tongo's tutorial on YouTube in order to track the eyes in a face. I managed to do so using rectangles, but I would like to track them using HoughCircles.
https://www.youtube.com/watch?v=07QAhRJmcKQ
I am using the following code to track my eyes, and it is drawing multiple circles around my eyes.
I only converted the image to grayscale, as he tells us to do in the tutorial. Can you please help? I am new to EMGU CV.
grayFrame.ROI = possibleROI_leftEye;
MCvAvgComp[][] leftEyesDetected = grayFrame.DetectHaarCascade(_eyes, 1.15, 0, Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new Size(20, 20));
grayFrame.ROI = Rectangle.Empty;

grayFrame.ROI = possibleROI_rightEye;
MCvAvgComp[][] rightEyesDetected = grayFrame.DetectHaarCascade(_eyes, 1.15, 0, Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new Size(20, 20));
grayFrame.ROI = Rectangle.Empty;

// If we are able to find eyes inside the possible face, it should be a face, maybe we find also a couple of eyes
if (leftEyesDetected[0].Length != 0 && rightEyesDetected[0].Length != 0)
{
    // draw the face
    frame.Draw(face.rect, new Bgr(Color.Violet), 2);

    #region Hough Circles Eye Detection
    grayFrame.ROI = possibleROI_leftEye;
    CircleF[] leftEyecircles = grayFrame.HoughCircles(new Gray(180), new Gray(70), 5.0, 10.0, 1, 200)[0];
    grayFrame.ROI = Rectangle.Empty;

    foreach (CircleF circle in leftEyecircles)
    {
        float x = circle.Center.X + startingLeftEyePointOptimized.X;
        float y = circle.Center.Y + startingLeftEyePointOptimized.Y;
        frame.Draw(new CircleF(new PointF(x, y), circle.Radius), new Bgr(Color.RoyalBlue), 4);
    }

    grayFrame.ROI = possibleROI_rightEye;
    CircleF[] rightEyecircles = grayFrame.HoughCircles(new Gray(180), new Gray(70), 2.0, 20.0, 1, 5)[0];
    grayFrame.ROI = Rectangle.Empty;

    foreach (CircleF circle in rightEyecircles)
    {
        float x = circle.Center.X + startingPointSearchEyes.X;
        float y = circle.Center.Y + startingPointSearchEyes.Y;
        frame.Draw(new CircleF(new PointF(x, y), circle.Radius), new Bgr(Color.RoyalBlue), 4);
    }
    #endregion
Now I changed the part where it finds the eyes to
grayImageFrame.ROI = possibleROI_leftEye;
CircleF[] leftEyecircles = grayImageFrame.HoughCircles(new Gray(180), new Gray(70), 5.0, 10.0, 1, 20)[0];
if (leftEyecircles.Length > 0)
{
    CircleF firstCircle = leftEyecircles[0]; // Pick first circle in list
    float x = firstCircle.Center.X + startingPointSearchEyes.X;
    float y = firstCircle.Center.Y + startingPointSearchEyes.Y;
    ImageFrame.Draw(new CircleF(new PointF(x, y), firstCircle.Radius), new Bgr(Color.RoyalBlue), 4);
}

grayImageFrame.ROI = possibleROI_rightEye;
CircleF[] rightEyecircles = grayImageFrame.HoughCircles(new Gray(180), new Gray(70), 5.0, 10.0, 1, 20)[0];
grayImageFrame.ROI = Rectangle.Empty;
if (rightEyecircles.Length > 0)
{
    CircleF firstCircle = rightEyecircles[0]; // Pick first circle in list
    float x = firstCircle.Center.X + startingPointSearchEyes.X;
    float y = firstCircle.Center.Y + startingPointSearchEyes.Y;
    ImageFrame.Draw(new CircleF(new PointF(x, y), firstCircle.Radius), new Bgr(Color.RoyalBlue), 4);
}
Only one circle is showing, but it is tracking parts around my eyes, not my eyes :(
The reason you get multiple circles is simply that you are drawing every found circle with this foreach loop:
foreach (CircleF circle in rightEyecircles)
{
    float x = circle.Center.X + startingPointSearchEyes.X;
    float y = circle.Center.Y + startingPointSearchEyes.Y;
    frame.Draw(new CircleF(new PointF(x, y), circle.Radius), new Bgr(Color.RoyalBlue), 4);
}
To draw only one circle, you must pick a single circle from the list (or possibly a composite estimate formed by a number of circles) and draw only that.
I am not really a C# guy, but I guess something like this would work
CircleF firstCircle = rightEyecircles[0]; // Pick first circle in list
float x = firstCircle.Center.X + startingPointSearchEyes.X;
float y = firstCircle.Center.Y + startingPointSearchEyes.Y;
frame.Draw(new CircleF(new PointF(x, y), firstCircle.Radius), new Bgr(Color.RoyalBlue), 4);
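If you want the composite estimate mentioned above instead of the first circle, a minimal sketch (averaging all candidates; requires using System.Linq;) could look like this:
// Sketch: average all candidate circles into a single estimate.
float avgX = rightEyecircles.Average(c => c.Center.X) + startingPointSearchEyes.X;
float avgY = rightEyecircles.Average(c => c.Center.Y) + startingPointSearchEyes.Y;
float avgR = rightEyecircles.Average(c => c.Radius);
frame.Draw(new CircleF(new PointF(avgX, avgY), avgR), new Bgr(Color.RoyalBlue), 4);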
Unlike Hannes, I think this can be done with image-processing methods to reduce the number of detections you receive, rather than just drawing one of the found circles.
Apply a Gaussian blur to reduce noise and avoid false circle detections (OpenCV C++ shown):
GaussianBlur(src_gray, src_gray, Size(9, 9), 2, 2);
Apply a min/max circle radius to the Hough transform:
min_radius = 0: minimum radius to be detected. If unknown, put zero as default.
max_radius = 0: maximum radius to be detected. If unknown, put zero as default.
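In Emgu CV terms, a rough sketch of both suggestions, reusing the question's variables (the blur kernel size and the radius bounds are guesses you would tune for your frame size):
grayFrame.ROI = possibleROI_leftEye;
// Smooth the ROI to suppress noise-driven circles (9x9 Gaussian kernel, tune as needed).
var smoothed = grayFrame.SmoothGaussian(9);
// The last two arguments are min/max radius in pixels; constrain them so only eye-sized circles survive.
CircleF[] leftEyecircles = smoothed.HoughCircles(new Gray(180), new Gray(70), 2.0, 20.0, 5, 25)[0];
grayFrame.ROI = Rectangle.Empty;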
For detecting one eye ball in the specific region perform following tasks
*1- Use haarcascade to detect eye and select ROI of that eye and detect hough circle there.
2- convert into gray image
3- threshold the image*
grayFrame._ThresholdBinary(new Gray(33), new Gray(255));
4- now find eye ball from hough circles.
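A rough sketch of steps 3 and 4 combined, keeping the question's variable names and the threshold value from above (the radius bounds are guesses to tune):
grayFrame.ROI = possibleROI_leftEye;
// Step 3: threshold the grayscale ROI (non-destructive variant of _ThresholdBinary).
var binary = grayFrame.ThresholdBinary(new Gray(33), new Gray(255));
// Step 4: look for the eyeball among the Hough circles of the thresholded ROI.
CircleF[] candidates = binary.HoughCircles(new Gray(180), new Gray(70), 2.0, 20.0, 5, 25)[0];
if (candidates.Length > 0)
{
    CircleF pupil = candidates[0];
    frame.Draw(new CircleF(new PointF(pupil.Center.X + startingLeftEyePointOptimized.X,
        pupil.Center.Y + startingLeftEyePointOptimized.Y), pupil.Radius), new Bgr(Color.RoyalBlue), 2);
}
grayFrame.ROI = Rectangle.Empty;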
Happy coding.

Drawing a polygon according to the input coordinates

How can I draw a polygon from input coordinates that are given in C#?
You didn't show any code, but based on those coordinates, you are applying some form of scaling to the image.
Using the Paint event of a PictureBox, here is an example using those coordinates on the screen. It fills in the polygon, then draws the border, and then loops through all the points to draw the red circles:
void pictureBox1_Paint(object sender, PaintEventArgs e) {
    e.Graphics.SmoothingMode = SmoothingMode.AntiAlias;
    e.Graphics.Clear(Color.White);

    // draw the shading background:
    List<Point> shadePoints = new List<Point>();
    shadePoints.Add(new Point(0, pictureBox1.ClientSize.Height));
    shadePoints.Add(new Point(pictureBox1.ClientSize.Width, 0));
    shadePoints.Add(new Point(pictureBox1.ClientSize.Width, pictureBox1.ClientSize.Height));
    e.Graphics.FillPolygon(Brushes.LightGray, shadePoints.ToArray());

    // scale the drawing larger:
    using (Matrix m = new Matrix()) {
        m.Scale(4, 4);
        e.Graphics.Transform = m;

        List<Point> polyPoints = new List<Point>();
        polyPoints.Add(new Point(10, 10));
        polyPoints.Add(new Point(12, 35));
        polyPoints.Add(new Point(22, 35));
        polyPoints.Add(new Point(24, 22));

        // use a semi-transparent background brush:
        using (SolidBrush br = new SolidBrush(Color.FromArgb(100, Color.Yellow))) {
            e.Graphics.FillPolygon(br, polyPoints.ToArray());
        }

        e.Graphics.DrawPolygon(Pens.DarkBlue, polyPoints.ToArray());

        foreach (Point p in polyPoints) {
            e.Graphics.FillEllipse(Brushes.Red, new Rectangle(p.X - 2, p.Y - 2, 4, 4));
        }
    }
}
You may use Graphics.DrawPolygon. You can store the coordinates in an array of Point and then pass that to the DrawPolygon method. You may want to see:
Drawing with Graphics in WinForms using C#
private System.Drawing.Graphics g;
private System.Drawing.Pen pen1 = new System.Drawing.Pen(System.Drawing.Color.Black); // pen for the outline

System.Drawing.Point[] p = new System.Drawing.Point[5];
p[0].X = 0;
p[0].Y = 0;
p[1].X = 53;
p[1].Y = 111;
p[2].X = 114;
p[2].Y = 86;
p[3].X = 34;
p[3].Y = 34;
p[4].X = 165;
p[4].Y = 7;

g = PictureBox1.CreateGraphics();
g.DrawPolygon(pen1, p);
This simple function generates an array of PointF containing the vertices of a regular polygon, where "center" is the center of the polygon, "sides" is its number of sides, "sideLength" is the distance in pixels from the center to each vertex (the circumradius; for a hexagon this happens to equal the side length), and "offset" is its rotation in degrees.
public PointF[] GetRegularPolygonScreenVertex(Point center, int sides, int sideLength, float offset)
{
    var points = new PointF[sides];
    for (int i = 0; i < sides; i++)
    {
        points[i] = new PointF(
            (float)(center.X + sideLength * Math.Cos((i * 360 / sides + offset) * Math.PI / 180f)),
            (float)(center.Y + sideLength * Math.Sin((i * 360 / sides + offset) * Math.PI / 180f))
        );
    }
    return points;
}
The result obtained can be used to draw a polygon, e.g. with:
GraphicsObject.DrawPolygon(new Pen(Brushes.Black), GetRegularPolygonScreenVertex(new Point(X, Y), 6, 30, 60f));
which will generate a regular hexagon with a side of 30 pixels, rotated by 60°.
(image: the resulting hexagon)
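Since the function treats sideLength as the distance from the center to each vertex, if you actually start from a desired side length s for an arbitrary n-gon, a small illustrative helper (not part of the original answer) can convert it to that radius first:
// Circumradius R of a regular n-gon with side length s: R = s / (2 * sin(PI / n)).
public static int SideLengthToCircumradius(int sides, float sideLength)
{
    return (int)Math.Round(sideLength / (2 * Math.Sin(Math.PI / sides)));
}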

Matrix transformations to recreate camera "Look At" functionality

Summary:
I'm given a series of points in 3D space, and I want to analyze them from any viewing angle. I'm trying to figure out how to reproduce the "Look At" functionality of OpenGL in WPF. I want the mouse X and Y movement to control the phi and theta spherical coordinates (respectively) of the camera, so that as I move the mouse, the camera appears to orbit around the center of mass (generally the origin) of the point cloud, which represents the target of the Look At.
What I've done:
I have written the following code, but so far it isn't doing what I want:
internal static Matrix3D CalculateLookAt(Vector3D eye, Vector3D at = new Vector3D(), Vector3D up = new Vector3D())
{
    if (Math.Abs(up.Length - 0.0) < double.Epsilon) up = new Vector3D(0, 1, 0);

    var zaxis = (at - eye);
    zaxis.Normalize();
    var xaxis = Vector3D.CrossProduct(up, zaxis);
    xaxis.Normalize();
    var yaxis = Vector3D.CrossProduct(zaxis, xaxis);

    return new Matrix3D(
        xaxis.X, yaxis.X, zaxis.X, 0,
        xaxis.Y, yaxis.Y, zaxis.Y, 0,
        xaxis.Z, yaxis.Z, zaxis.Z, 0,
        Vector3D.DotProduct(xaxis, -eye), Vector3D.DotProduct(yaxis, -eye), Vector3D.DotProduct(zaxis, -eye), 1
    );
}
I got the algorithm from this link: http://msdn.microsoft.com/en-us/library/bb205342(VS.85).aspx
I then apply the returned matrix to all of the points using this:
var vector = new Vector3D(p.X, p.Y, p.Z);
var projection = Vector3D.Multiply(vector, _camera); // _camera is the LookAt matrix

if (double.IsNaN(projection.X)) projection.X = 0;
if (double.IsNaN(projection.Y)) projection.Y = 0;
if (double.IsNaN(projection.Z)) projection.Z = 0;

return new Point(
    (dispCanvas.ActualWidth * projection.X / 320),
    (dispCanvas.ActualHeight * projection.Y / 240)
);
I am calculating the center of all the points as the at vector, and I've been setting my initial eye vector at (center.X, center.Y, center.Z + 100), which is plenty far away from all the points.
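For reference, the center here is just the centroid of the cloud; a minimal sketch of what GetCenter might look like (the real implementation isn't shown in the question, and the Point3D collection type is assumed):
// Hypothetical centroid helper over the point cloud.
private static Point3D GetCenter(IList<Point3D> points)
{
    double x = 0, y = 0, z = 0;
    foreach (var p in points) { x += p.X; y += p.Y; z += p.Z; }
    return new Point3D(x / points.Count, y / points.Count, z / points.Count);
}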
I then take the mouse movement and apply the following code to get the spherical coordinates and pass them into the CalculateLookAt function:
var center = GetCenter(_points);
var pos = e.GetPosition(Canvas4); // e is of type MouseButtonEventArgs
var delta = _previousPoint - pos;

double r = 100;
double theta = delta.Y * Math.PI / 180;
double phi = delta.X * Math.PI / 180;

var x = r * Math.Sin(theta) * Math.Cos(phi);
var y = r * Math.Cos(theta);
var z = -r * Math.Sin(theta) * Math.Sin(phi);

_camera = MathHelper.CalculateLookAt(new Vector3D(center.X * x, center.Y * y, center.Z * z), new Vector3D(center.X, center.Y, center.Z));
UpdateCanvas(); // Redraws the points on the canvas using the new _camera values
Conclusion:
This does not make the camera orbit around the points. So either my understanding of how to use the Look At function is off, or my math is incorrect.
Any help would be very much appreciated.
A Vector3D won't pick up the translation, because it is a vector (a direction), which doesn't exist in affine space (i.e. 3D vector space with a translation component), only in vector space. You need a Point3D:
var m = new Matrix3D(
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    10, 10, 10, 1);
var v = new Point3D(1, 1, 1);
var r = Point3D.Multiply(v, m); // 11,11,11
Note your presumed answer is also incorrect, as it should be 10 + 1 for each component, since your vector is [1,1,1].
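Equivalently, if you keep the data as a Point3D, Matrix3D.Transform applies the full transform including the offset row; a brief sketch using the matrix m above:
// Matrix3D.Transform on a Point3D includes the translation, unlike Vector3D.Multiply on a Vector3D.
var transformed = m.Transform(new Point3D(1, 1, 1)); // (11, 11, 11)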
Well, it turns out that the Matrix3D library has some interesting issues.
I noticed that Vector3D.Multiply(vector, matrix) would not translate the vector.
For example:
var matrixTest = new Matrix3D(
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    10, 10, 10, 1
);
var vectorTest = new Vector3D(1, 1, 1);
var result = Vector3D.Multiply(vectorTest, matrixTest);
// result = {1,1,1}, should be {11,11,11}
I ended up having to rewrite some of the basic matrix math functions in order for the code to work.
Everything was fine on the logic side; it was the basic math (handled by the Matrix3D library) that was the problem.
Here is the fix. Replace all Vector3D.Multiply method calls with this:
public static Vector3D Vector3DMultiply(Vector3D vector, Matrix3D matrix)
{
    return new Vector3D(
        vector.X * matrix.M11 + vector.Y * matrix.M12 + vector.Z * matrix.M13 + matrix.OffsetX,
        vector.X * matrix.M21 + vector.Y * matrix.M22 + vector.Z * matrix.M23 + matrix.OffsetY,
        vector.X * matrix.M31 + vector.Y * matrix.M32 + vector.Z * matrix.M33 + matrix.OffsetZ
    );
}
And everything works!
