I'm trying to rotate an image (around its center) and place it on top of another image.
After rotation, the XY coordinates I expected it to end up at are all wrong.
An example of how to do this would be greatly appreciated.
I'm currently drawing debug frames instead of an image.
My coordinate system is based on the center position of the placement, but I could switch to a top-left one for the XY coordinates.
private static void DrawDebugFrames(List<LogoPlacementContentDto> placements, Image<Rgba32> mutatedImage)
{
    foreach (var placement in placements)
    {
        var width = placement.Width;
        var height = placement.Height;
        using (var logo = new Image<Rgba32>(Configuration.Default, width, height))
        {
            var centerX = placement.X; // center of imagePlacement
            var centerY = placement.Y; // center of imagePlacement
            var affineBuilder = new AffineTransformBuilder();
            affineBuilder.PrependTranslation(new Vector2(centerX, centerY));
            affineBuilder.PrependRotationDegrees(placement.Rotation);
            logo.Mutate(x => x
                .BackgroundColor(Rgba32.Beige)
                .DrawPolygon(
                    Rgba32.HotPink,
                    4,
                    new Vector2(0, 0),
                    new Vector2(width, 0),
                    new Vector2(width, height),
                    new Vector2(0, height))
                .Transform(affineBuilder));
            mutatedImage.Mutate(x => x
                .DrawImage(logo, new Point(-(width / 2), -(height / 2)), GraphicsOptions.Default));
        }
    }
}
(Image) Expected result (editor)
(Image) Result
I was able to solve this.
The issue was that the client never sent the XY coordinates of the bounding box; instead, I had been trying to use the XY of the top-left corner or the center position.
With this fixed, I adjusted the code slightly to reflect the change.
private static void DrawDebugFrames(List<LogoPlacementContentDto> placements, Image<Rgba32> mutatedImage)
{
    foreach (var placement in placements)
    {
        var width = placement.WidthInt;
        var height = placement.HeightInt;
        using (var logo = new Image<Rgba32>(Configuration.Default, width, height))
        {
            var positionX = placement.Position.X;
            var positionY = placement.Position.Y;
            var affineBuilder = new AffineTransformBuilder();
            affineBuilder.PrependTranslation(new Vector2(positionX, positionY));
            affineBuilder.PrependRotationDegrees(placement.Rotation);
            affineBuilder.AppendTranslation(new Vector2(-positionX, -positionY));
            logo.Mutate(x => x
                .BackgroundColor(Rgba32.Beige)
                .DrawPolygon(
                    Rgba32.HotPink,
                    4,
                    new Vector2(0, 0),
                    new Vector2(width, 0),
                    new Vector2(width, height),
                    new Vector2(0, height))
                .Transform(affineBuilder));
            mutatedImage.Mutate(x => x
                .DrawImage(logo, new Point(placement.Position.XInt, placement.Position.YInt), GraphicsOptions.Default));
        }
    }
}
Related
I am trying to scale and skew a bitmap in SkiaSharp with an affine matrix; however, the result always cuts off part of the bitmap. I am also not sure whether my affine matrix has the correct values.
Here is a diagram of what I am trying to accomplish: on the left is the original image, with a bitmap size of 178x242. On the right is the scaled and skewed image. Its bounding box is 273x366, and I also know that the image has been skewed -10 pixels in x and 7 pixels in y.
Here is my code for applying the affine matrix:
public SKBitmap ApplyAffine(SKBitmap origBitmap, SKSizeI newSize, SKPointI xyRotation)
{
    var skewX = 1f / xyRotation.X;
    var skewY = 1f / xyRotation.Y;
    // Scale transform
    var scaleX = (newSize.Width / (float)origBitmap.Width);
    var scaleY = (newSize.Height / (float)origBitmap.Height);
    // Affine transform
    SKMatrix affine = new SKMatrix
    {
        ScaleX = scaleX,
        SkewY = skewY,
        SkewX = skewX,
        ScaleY = scaleY,
        TransX = 0,
        TransY = 0,
        Persp2 = 1
    };
    var bitmap = origBitmap.Copy();
    var newBitmap = new SKBitmap(newSize.Width, newSize.Height);
    using (var canvas = new SKCanvas(newBitmap))
    {
        canvas.SetMatrix(affine);
        canvas.DrawBitmap(bitmap, 0, 0);
        canvas.Restore();
    }
    return newBitmap;
}
The resulting bitmap has the left side cut off. It also appears that it is not translated correctly. How do I properly apply this affine transform?
If I understood you correctly, and xyRotation is what I think it is from your description, then you were pretty close to the solution :)
public SKBitmap ApplyAffine(SKBitmap origBitmap, SKSizeI newSize, SKPointI xyRotation)
{
    // mcoo: skew is the tangent of the skew angle, but since xyRotation is not normalized,
    // it should be calculated based on the original width/height
    var skewX = (float)xyRotation.X / origBitmap.Height;
    var skewY = (float)xyRotation.Y / origBitmap.Width;
    // Scale transform
    // mcoo (edit): we need to account here for the fact that the given skew is known AFTER the scale is applied
    var scaleX = (float)(newSize.Width - Math.Abs(xyRotation.X)) / origBitmap.Width;
    var scaleY = (float)(newSize.Height - Math.Abs(xyRotation.Y)) / origBitmap.Height;
    // Affine transform
    SKMatrix affine = new SKMatrix
    {
        ScaleX = scaleX,
        SkewY = skewY,
        SkewX = skewX,
        ScaleY = scaleY,
        // mcoo: we need to account for negative skew moving the image bounds towards negative coords
        TransX = Math.Max(0, -xyRotation.X),
        TransY = Math.Max(0, -xyRotation.Y),
        Persp2 = 1
    };
    var bitmap = origBitmap.Copy();
    var newBitmap = new SKBitmap(newSize.Width, newSize.Height);
    using (var canvas = new SKCanvas(newBitmap))
    {
        // canvas.Clear(SKColors.Red);
        canvas.SetMatrix(affine);
        canvas.DrawBitmap(bitmap, 0, 0);
    }
    return newBitmap;
}
Now calling ApplyAffine(skBitmap, new SKSizeI(273, 366), new SKPointI(-10, 7)) on an image of size 178x242 yields a roughly correct result (red background added for reference):
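For reference, a minimal end-to-end sketch of that call, assuming the 178x242 source is loaded from a file (the file names here are placeholders, not from the original post):
using (var source = SKBitmap.Decode("logo.png")) // placeholder path to the 178x242 source
using (var transformed = ApplyAffine(source, new SKSizeI(273, 366), new SKPointI(-10, 7)))
using (var image = SKImage.FromBitmap(transformed))
using (var data = image.Encode(SKEncodedImageFormat.Png, 100))
using (var stream = File.OpenWrite("logo-transformed.png")) // placeholder output path
{
    // save the skewed/scaled bitmap so its bounds can be compared against the editor output
    data.SaveTo(stream);
}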
I have two rectangles:
the parent, rotated -15 degrees relative to the center of the canvas;
the child, rotated -15 degrees relative to the center of the canvas and then 5 degrees relative to the center of the parent.
Taking the original image:
I made the described modifications in the image editor:
I need to repeat these operations with the rectangles; here is my code:
var parentAngle = -15;
var childrenAngle = 5;
var parent = new Rectangle(new Point(50, 160), new Size(200, 300));
var children = new Rectangle(new Point(25, 175), new Size(50, 50));

// load transformed file to use as canvas
var bmp = Image.FromFile(@"D:\Temp\transform.png");
var size = bmp.Size;
var canvasCenter = new PointF(size.Width / 2, size.Height / 2);
var parentCenter = new PointF(parent.Location.X + parent.Width / 2, parent.Location.Y + parent.Height / 2);
var parentLocation = parent.Location;
var parentVertices = parent.GetVertices();
var childrenVertices = children.GetVertices();

// rotate parent by canvas center
var rotateMatrix = new Matrix();
rotateMatrix.RotateAt(parentAngle, canvasCenter);
rotateMatrix.TransformPoints(parentVertices);

// rotate children vertices around the parent center
var rotateMatrix2 = new Matrix();
rotateMatrix2.RotateAt(childrenAngle, parentCenter);
rotateMatrix2.TransformPoints(childrenVertices);

// translate children vertices by the parent location
var translateMatrix = new Matrix();
translateMatrix.Translate(parentLocation.X, parentLocation.Y);
translateMatrix.TransformPoints(childrenVertices);

// rotate children by canvas center
rotateMatrix.TransformPoints(childrenVertices);

using (Graphics g = Graphics.FromImage(bmp))
{
    g.DrawPolygon(Pens.Green, parentVertices);
    g.DrawPolygon(Pens.Blue, childrenVertices);
}
Result:
I made a mistake somewhere: the parent matches, but the child doesn't. Maybe it breaks down when calculating the parent offset?
Update:
The GetVertices function is implemented as a helper and looks like this:
public static PointF[] GetVertices(this Rectangle rect)
{
    return new[] {
        rect.Location,
        new PointF(rect.Right, rect.Top),
        new PointF(rect.Right, rect.Bottom),
        new PointF(rect.Left, rect.Bottom)
    };
}
I found a few problems:
First, paint.net rotates the selected layer relative to the center of the canvas, so nothing lined up and I had to redraw the test case.
Next, I had to redo the calculation that offsets the child's location by the parent's location.
Now it looks like this:
var parentAngle = -15;
var childrenAngle = 5;
var parent = new Rectangle(new Point(50, 160), new Size(200, 300));
var children = new Rectangle(new Point(25, 175), new Size(50, 50));

// load transformed file to use as canvas
var bmp = Image.FromFile(@"D:\Temp\rotate_5.png");
var size = bmp.Size;
var canvasCenter = new PointF(size.Width / 2, size.Height / 2);
var parentLocation = parent.Location;
var parentCenter = new PointF(parentLocation.X + parent.Width / 2, parentLocation.Y + parent.Height / 2);
var childrenLocation = children.Location;

// translate the child's location by the parent's location
children.Location = childrenLocation = new Point(parentLocation.X + childrenLocation.X, childrenLocation.Y + parentLocation.Y);
var childrenCenter = new PointF(childrenLocation.X + children.Width / 2, childrenLocation.Y + children.Height / 2);
var parentVertices = parent.GetVertices();
var childrenVertices = children.GetVertices();

// rotate the child around the parent center
var rotateChildrenMatrix = new Matrix();
rotateChildrenMatrix.RotateAt(childrenAngle, parentCenter);
rotateChildrenMatrix.TransformPoints(childrenVertices);

// rotate both around the canvas center
var rotateMatrix = new Matrix();
rotateMatrix.RotateAt(parentAngle, canvasCenter);
rotateMatrix.TransformPoints(parentVertices);
rotateMatrix.TransformPoints(childrenVertices);

using (Graphics g = Graphics.FromImage(bmp))
{
    g.DrawPolygon(Pens.Green, parentVertices);
    g.DrawPolygon(Pens.Blue, childrenVertices);
}
Result:
I am using a matrix to translate and then rotate in 3D (x, y, z), using the variables xRotate, yRotate, zRotate, and depth == 300.
using (var bmp = new SKBitmap(800, 600))
using (var canvas = new SKCanvas(bmp))
using (var paint = new SKPaint())
{
    canvas.Clear(SKColors.White);
    paint.IsAntialias = true;

    // Find center of canvas
    var info = bmp.Info;
    float xCenter = info.Width / 2;
    float yCenter = info.Height / 2;

    // Translate center to origin
    SKMatrix matrix = SKMatrix.MakeTranslation(-xCenter, -yCenter);

    // Use 3D matrix for 3D rotations and perspective
    SKMatrix44 matrix44 = SKMatrix44.CreateIdentity();
    matrix44.PostConcat(SKMatrix44.CreateRotationDegrees(1, 0, 0, xRotate));
    matrix44.PostConcat(SKMatrix44.CreateRotationDegrees(0, 1, 0, yRotate));
    matrix44.PostConcat(SKMatrix44.CreateRotationDegrees(0, 0, 1, zRotate));

    SKMatrix44 perspectiveMatrix = SKMatrix44.CreateIdentity();
    perspectiveMatrix[3, 2] = -1 / depth;
    matrix44.PostConcat(perspectiveMatrix);

    // Concatenate with 2D matrix
    SKMatrix.PostConcat(ref matrix, matrix44.Matrix);

    // Translate back to center
    SKMatrix.PostConcat(ref matrix, SKMatrix.MakeTranslation(xCenter, yCenter));

    // Set the matrix and display the bitmap
    canvas.SetMatrix(matrix);
    canvas.DrawBitmap(currentImage, 50, 25, paint);
    pictureBox1.Image = bmp.ToBitmap();
}
If I have some Point in the original currentImage, I want to calculate its new location after drawing the transformed image. How can I do that? Would I reuse the matrix to calculate it?
Found the answer. Let the point be (1, 2) in currentImage. Then simply:
var newPoint = matrix.MapPoint(1, 2);
newPoint = new SKPoint(50 + newPoint.X, 25 + newPoint.Y); // + the offsets passed to DrawBitmap
Or, to draw on a canvas whose matrix has already been set with canvas.SetMatrix:
var newPoint = new SKPoint(1, 2);
canvas.DrawCircle(newPoint.X + 50, newPoint.Y + 25, 7, paint); // + the offsets passed to DrawBitmap
How can I draw a polygon in C# from a given set of input coordinates?
You didn't show any code, but based on those coordinates you appear to be applying some form of scaling to the image.
Using the Paint event of a PictureBox, here is an example using those coordinates on the screen. It fills in the polygon, then draws the border, then loops through all the points to draw the red circles:
void pictureBox1_Paint(object sender, PaintEventArgs e) {
    e.Graphics.SmoothingMode = SmoothingMode.AntiAlias;
    e.Graphics.Clear(Color.White);
    // draw the shading background:
    List<Point> shadePoints = new List<Point>();
    shadePoints.Add(new Point(0, pictureBox1.ClientSize.Height));
    shadePoints.Add(new Point(pictureBox1.ClientSize.Width, 0));
    shadePoints.Add(new Point(pictureBox1.ClientSize.Width, pictureBox1.ClientSize.Height));
    e.Graphics.FillPolygon(Brushes.LightGray, shadePoints.ToArray());
    // scale the drawing larger:
    using (Matrix m = new Matrix()) {
        m.Scale(4, 4);
        e.Graphics.Transform = m;
        List<Point> polyPoints = new List<Point>();
        polyPoints.Add(new Point(10, 10));
        polyPoints.Add(new Point(12, 35));
        polyPoints.Add(new Point(22, 35));
        polyPoints.Add(new Point(24, 22));
        // use a semi-transparent background brush:
        using (SolidBrush br = new SolidBrush(Color.FromArgb(100, Color.Yellow))) {
            e.Graphics.FillPolygon(br, polyPoints.ToArray());
        }
        e.Graphics.DrawPolygon(Pens.DarkBlue, polyPoints.ToArray());
        foreach (Point p in polyPoints) {
            e.Graphics.FillEllipse(Brushes.Red, new Rectangle(p.X - 2, p.Y - 2, 4, 4));
        }
    }
}
You may use Graphics.DrawPolygon. You can store the coordinates in an array of Point and then pass that array to the DrawPolygon method. You may want to see:
Drawing with Graphics in WinForms using C#
private System.Drawing.Graphics g;

// pen for the outline (not declared in the original snippet)
System.Drawing.Pen pen1 = new System.Drawing.Pen(System.Drawing.Color.Black, 2);
System.Drawing.Point[] p = new System.Drawing.Point[5];
p[0].X = 0;
p[0].Y = 0;
p[1].X = 53;
p[1].Y = 111;
p[2].X = 114;
p[2].Y = 86;
p[3].X = 34;
p[3].Y = 34;
p[4].X = 165;
p[4].Y = 7;
g = PictureBox1.CreateGraphics();
g.DrawPolygon(pen1, p);
This simple function generates an array of PointF containing the vertices of the regular polygon to be drawn, where "center" is the center of the polygon, "sides" is its number of sides, "sideLength" is the distance in pixels from the center to each vertex (which for a regular hexagon equals the side length), and "offset" is its rotation in degrees.
public PointF[] GetRegularPolygonScreenVertex(Point center, int sides, int sideLength, float offset)
{
    var points = new PointF[sides];
    for (int i = 0; i < sides; i++)
    {
        // place each vertex on a circle of radius sideLength around the center,
        // stepping 360/sides degrees per vertex and converting degrees to radians
        points[i] = new PointF(
            (float)(center.X + sideLength * Math.Cos((i * 360f / sides + offset) * Math.PI / 180f)),
            (float)(center.Y + sideLength * Math.Sin((i * 360f / sides + offset) * Math.PI / 180f))
        );
    }
    return points;
}
The result can then be used to draw a polygon, e.g. with:
GraphicsObject.DrawPolygon(new Pen(Brushes.Black), GetRegularPolygonScreenVertex(new Point(X, Y), 6, 30, 60f));
which will generate a regular hexagon with a side of 30 pixels, rotated by 60°.
(Image) hex
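For completeness, a minimal sketch of how such a call might look inside a PictureBox Paint handler, assuming the GetRegularPolygonScreenVertex method above is in scope (the control name and the center coordinates are placeholders):
void pictureBox1_Paint(object sender, PaintEventArgs e)
{
    e.Graphics.SmoothingMode = SmoothingMode.AntiAlias;
    // 6 sides, 30 px from center to each vertex, rotated by 60 degrees
    PointF[] hexagon = GetRegularPolygonScreenVertex(new Point(100, 100), 6, 30, 60f);
    using (var pen = new Pen(Color.Black, 2))
    {
        e.Graphics.DrawPolygon(pen, hexagon);
    }
}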
I have a bitmap object (or any other image) and I'm drawing some lines on this bitmap to create a polygon.
After the drawing, I need to clone/copy/cut the selected area (based on the lines).
I can't use the Bitmap.Clone method because it only works with a rectangle.
I need some kind of clone implementation based on a Point[] or a GraphicsPath...
Please help, I'm new to GDI+/Graphics... :)
Update
I tried doing something like this:
Graphics g = pbImage.CreateGraphics();
g.Clip = new Region(path);
Image img = null;
g.DrawImage(img, new Point(0, 0));
Can you provide a code example? I'm new to GDI+ and I can't implement what you suggested.
I don't understand the:
another buffer/temp graphics object
An example of Barndon Moretz's solution.
int x = 0;
int y = 0;
int width = 0;
int height = 0;
Point[] pesource = null;
GraphicsPath gpdest = new GraphicsPath();
Bitmap source = new Bitmap(Image.FromFile(@"IMAGEPATH"));

// Your polygon
pesource = new Point[]
{
    new Point(10, 100),
    new Point(30, 150),
    new Point(40, 170),
    new Point(60, 120),
    new Point(70, 250),
    new Point(40, 300),
    new Point(10, 250),
    new Point(30, 150)
};

// Determine the destination size/position
x = source.Width;
y = source.Height;
foreach (var p in pesource)
{
    if (p.X < x)
        x = p.X;
    if (p.X > width)
        width = p.X;
    if (p.Y < y)
        y = p.Y;
    if (p.Y > height)
        height = p.Y;
}
height = height - y;
width = width - x;

gpdest.AddPolygon(pesource);
// shift the path so the polygon's bounding box starts at (0, 0)
Matrix m = new Matrix(1, 0, 0, 1, -x, -y);
gpdest.Transform(m);

// Create the Bitmap
Bitmap clipped = new Bitmap(width, height);

// Draw on the Bitmap
using (Graphics g = Graphics.FromImage(clipped))
{
    g.SetClip(gpdest);
    g.DrawImage(source, -x, -y);
}
You can use Graphics.Clip to specify a custom clipping region (from a GraphicsPath), then redraw your "source" bitmap/image onto another buffer/temp graphics object, which should give you the desired result.
This isn't the most efficient solution, but it should at least get you going in the right direction.
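To illustrate that idea, here is a minimal sketch of the clip-and-redraw approach (the method name ClipToPolygon and its parameters are hypothetical, not part of any library):
Bitmap ClipToPolygon(Bitmap source, Point[] polygon)
{
    using (var path = new GraphicsPath())
    {
        path.AddPolygon(polygon);
        var bounds = Rectangle.Round(path.GetBounds());

        // temp buffer sized to the polygon's bounding box
        var result = new Bitmap(bounds.Width, bounds.Height);
        using (var g = Graphics.FromImage(result))
        {
            // move the path into the buffer's coordinate space
            using (var shift = new Matrix(1, 0, 0, 1, -bounds.X, -bounds.Y))
            {
                path.Transform(shift);
            }
            g.Clip = new Region(path);
            // redraw the source so only the polygon area survives
            g.DrawImage(source, -bounds.X, -bounds.Y);
        }
        return result;
    }
}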