I built a system that rotates and scales objects using the left-most, vertically centered point of the object as the origin. After transforming, you can send the objects' HTML/CSS3 information to the server, and C# will attempt to redraw the scene you created. C# rotates the objects/images by the same number of degrees, but it rotates them around their horizontal and vertical center, which causes the objects/images to change dimensions. I already have those changes calculated; however, there is an offset in the x,y coordinates of the top-left corner of the objects/images.
This is the method I've been attempting to work out to deal with the offsets:
// Method signature inferred from the variables used below (angle in degrees,
// the original top-left position and size, and the pre-computed x/y bounds offsets).
private static int[] CalculateRotationOffset(
    float angle,
    int originalX, int originalY,
    int origwidth, int origheight,
    int xMin, int yMin)
{
    int[] rotResult = new int[2];

    // Corners of the object relative to its top-left, before rotation.
    int[] tLCoord = new int[2];
    int[] tRCoord = new int[2];
    int[] bLCoord = new int[2];
    int[] bRCoord = new int[2];

    // Corners after rotation.
    int[] tLCoordTmp = new int[2];
    int[] tRCoordTmp = new int[2];
    int[] bLCoordTmp = new int[2];
    int[] bRCoordTmp = new int[2];

    float sin = (float)Math.Sin(angle * Math.PI / 180.0);
    float cos = (float)Math.Cos(angle * Math.PI / 180.0);

    tLCoord[0] = (originalX - Math.Abs(xMin));
    tLCoord[1] = (originalY - Math.Abs(yMin));
    tRCoord[0] = origwidth;
    tRCoord[1] = 0;
    bLCoord[0] = 0;
    bLCoord[1] = (origheight * -1);
    bRCoord[0] = origwidth;
    bRCoord[1] = (origheight * -1);

    // Rotate the top-left corner, then the remaining corners relative to it.
    tLCoordTmp[0] = Convert.ToInt32((tLCoord[0] * cos) - (tLCoord[1] * sin));
    tLCoordTmp[1] = Convert.ToInt32((tLCoord[1] * cos) + (tLCoord[0] * sin));
    tRCoordTmp[0] = Convert.ToInt32(((tLCoordTmp[0] + tRCoord[0]) * cos) - (tRCoord[1] * sin));
    tRCoordTmp[1] = Convert.ToInt32(((tLCoordTmp[1] + tRCoord[1]) * cos) + (tRCoord[0] * sin));
    bLCoordTmp[0] = Convert.ToInt32(((tLCoordTmp[0] + bLCoord[0]) * cos) - (bLCoord[1] * sin));
    bLCoordTmp[1] = Convert.ToInt32(((tLCoordTmp[1] + bLCoord[1]) * cos) + (bLCoord[0] * sin));
    bRCoordTmp[0] = Convert.ToInt32(((tLCoordTmp[0] + bRCoord[0]) * cos) - (bRCoord[1] * sin));
    bRCoordTmp[1] = Convert.ToInt32(((tLCoordTmp[1] + bRCoord[1]) * cos) + (bRCoord[0] * sin));

    // Pick which rotated corner supplies the new top-left x and y, per quadrant.
    if (angle >= 270)
    {
        rotResult[0] = tLCoordTmp[0];
        rotResult[1] = tRCoordTmp[1];
    }
    else if (angle <= 90)
    {
        rotResult[0] = bLCoordTmp[0];
        rotResult[1] = tLCoordTmp[1];
    }
    else if (angle > 90 && angle <= 180)
    {
        rotResult[0] = bRCoordTmp[0];
        rotResult[1] = bLCoordTmp[1];
    }
    else if (angle > 180 && angle < 270)
    {
        rotResult[0] = tRCoordTmp[0];
        rotResult[1] = bRCoordTmp[1];
    }

    return rotResult;
}
Right away I know there are a few issues with how this formula works out, relating to coordinate planes and the different ways C# and HTML/CSS render visual elements. I've been running small experiments to compensate for those, but nothing seems to be getting any closer to a solution. Any ideas?
The answer was to first calculate the image's X,Y offsets from the top-left corner of the object while rotating the image in C# via TranslateTransform and RotateTransform to prevent clipping, then independently calculate the proper X,Y coordinates for the object using the left-most, vertically centered point of the object. After that it was only a matter of writing a few conditional statements to deal with the quadrant-based calculation differences.
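For anyone following along, here is a minimal sketch of the rotate-without-clipping step described above, using System.Drawing's TranslateTransform and RotateTransform. The bitmap sizing and the helper name are my own illustration, not the original code:

using System;
using System.Drawing;
using System.Drawing.Drawing2D;

// Hypothetical helper: rotates an image about its center into a bitmap large
// enough to hold the rotated result, so nothing gets clipped.
static Bitmap RotateWithoutClipping(Image source, float angleDegrees)
{
    double rad = angleDegrees * Math.PI / 180.0;
    double cos = Math.Abs(Math.Cos(rad));
    double sin = Math.Abs(Math.Sin(rad));

    // Axis-aligned bounding box of the rotated image.
    int newWidth = (int)Math.Ceiling(source.Width * cos + source.Height * sin);
    int newHeight = (int)Math.Ceiling(source.Width * sin + source.Height * cos);

    var result = new Bitmap(newWidth, newHeight);
    using (var g = Graphics.FromImage(result))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        // Move the origin to the center of the new bitmap, rotate, then draw
        // the source image centered on that origin.
        g.TranslateTransform(newWidth / 2f, newHeight / 2f);
        g.RotateTransform(angleDegrees);
        g.DrawImage(source, -source.Width / 2f, -source.Height / 2f);
    }
    return result;
}

The separate top-left X,Y calculation using the left-most, vertically centered origin is then applied when positioning this bitmap in the redrawn scene, as described above.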
So, I'm working on a Pie Control built on OrbitView from the Windows Community Toolkit.
I tried to change the behavior of OrbitView so that it arranges items starting at the top instead of at the left center.
Here is how it looks when OrbitView has only one item.
I tracked it down and found the code that OrbitView uses to arrange items. The problem is that I know nothing about math and don't know where to change the values :/ So, here is the code.
protected override Size ArrangeOverride(Size finalSize)
{
    var angle = 2 * Math.PI / Children.Count;
    var minDistance = 80;
    var maxDistance = Math.Max(minDistance, (Math.Min(finalSize.Width, finalSize.Height) - OrbitView.MaxItemSize) / 2);
    var elementsProperties = new List<OrbitViewElementProperties>();

    for (var i = 0; i < Children.Count; i++)
    {
        var element = Children.ElementAt(i);
        OrbitViewDataItem orbitViewDataItem = null;
        if (element is FrameworkElement)
        {
            orbitViewDataItem = ((FrameworkElement)element).DataContext as OrbitViewDataItem;
        }

        var d = orbitViewDataItem != null && orbitViewDataItem.Distance >= 0 ? orbitViewDataItem.Distance : 0.5;
        d = Math.Min(d, 1d);
        var distance = (d * (maxDistance - minDistance)) + minDistance;

        var x = distance * Math.Cos((angle * i) + (angle / 2));
        var y = distance * Math.Sin((angle * i) + (angle / 2));

        var x_normalized = (finalSize.Width / 2) + x - (element.DesiredSize.Width / 2);
        var y_normalized = (finalSize.Height / 2) - y - (element.DesiredSize.Height / 2);
        var point = new Point(x_normalized, y_normalized);

        element.Arrange(new Rect(point, element.DesiredSize));

        var elementProperties = new OrbitViewElementProperties()
        {
            XYFromCenter = new Point(x, y),
            DistanceFromCenter = distance,
            Element = element
        };
        elementsProperties.Add(elementProperties);

        if (ItemArranged != null)
        {
            var args = new OrbitViewPanelItemArrangedArgs()
            {
                ElementProperties = elementProperties,
                ItemIndex = i
            };
            ItemArranged.Invoke(this, args);
        }
    }

    ItemsArranged?.Invoke(this, new OrbitViewPanelItemsArrangedArgs() { Elements = elementsProperties });

    return finalSize;
}
So, where do I change it so that it starts placing items at the top center (0 degrees) instead of at the left center (270 degrees)?
Edit: I forked the project and stripped it down to just a few controls here: https://github.com/ray1997/WindowsCommunityToolkit/tree/ForJustR
The code I mentioned above is here: https://github.com/ray1997/WindowsCommunityToolkit/blob/ForJustR/Microsoft.Toolkit.Uwp.UI.Controls/OrbitView/OrbitViewPanel.cs, line 90.
var angle = 2 * Math.PI / Children.Count;
This looks like the angle is expressed in radians.
var distance = (d * (maxDistance - minDistance)) + minDistance;
var x = distance * Math.Cos((angle * i) + (angle / 2));
var y = distance * Math.Sin((angle * i) + (angle / 2));
and here it is used to calculate the x and y values for the elements.
Now, offsetting by 90°, i.e. 2 * Math.PI / 4 radians, will shift every item by a quarter turn.
So that gets you:
// offset the first element by 90°
var customOffset = 2 * Math.PI / 4;
var distance = (d * (maxDistance - minDistance)) + minDistance;
var x = distance * Math.Cos((angle * i) + (angle / 2) - customOffset);
var y = distance * Math.Sin((angle * i) + (angle / 2) - customOffset);
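As a quick sanity check (my own, not from the answer): with a single child, angle = 2 * Math.PI, so the original argument is angle / 2 = π, which is the left-center position. With customOffset subtracted it becomes π - π / 2 = π / 2, giving x = 0 and y = distance; since y_normalized subtracts y from finalSize.Height / 2, the item ends up at the top center.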
I'm trying to color an image based on its movement (i.e. the vector direction) in Emgu CV. I have managed to calculate dense optical flow for my video stream. I have used this:
OpticalFlow.Farneback(prev,NextFrame,velx,vely,0.5,1,1,2,5,1.1,Emgu.CV.CvEnum.OPTICALFLOW_FARNEBACK_FLAG.FARNEBACK_GAUSSIAN);
The variables vely and velx contain the vertical and horizontal velocities. Does anyone know how to map colors to these? There are many algorithms that calculate dense flow; Horn-Schunck (HS) can also be used, but I'm not sure what to use.
Any solution would be really appreciated.
EDIT:
Optical Flow Color Map in OpenCV
This is the same thing that I wanted. Since I'm using Emgu CV, I tried to convert this code to C#, but I can't understand how to pass the dense flow to the "colorflow" function.
public void colorflow(MCvMat imgColor)
{
    MCvMat imgHsv = new MCvMat();
    double max_s = 0;
    double[] hsv_ptr = new double[3000];
    IntPtr[] color_ptr = new IntPtr[3000];
    int r = 0, g = 0, b = 0;
    double angle = 0;
    double h = 0, s = 0, v = 0;
    double deltaX = 0, deltaY = 0;
    int x = 0, y = 0;

    for (y = 0; y < imgColor.rows; y++)
    {
        for (x = 0; x < imgColor.cols; x++)
        {
            PointF fxy = new PointF(y, x);
            deltaX = fxy.X;
            deltaY = fxy.Y;
            angle = Math.Atan2(deltaX, deltaY);
            if (angle < 0)
                angle += 2 * Math.PI;

            hsv_ptr[3 * x] = angle * 180 / Math.PI;
            hsv_ptr[3 * x + 1] = Math.Sqrt(deltaX * deltaX + deltaY * deltaY);
            hsv_ptr[3 * x + 2] = 0.9;
            if (hsv_ptr[3 * x + 1] > max_s)
                max_s = hsv_ptr[3 * x + 1];
        }
    }

    for (y = 0; y < imgColor.rows; y++)
    {
        //hsv_ptr=imgHsv.ptr<float>(y);
        //color_ptr=imgColor.ptr<unsigned char>(y);
        for (x = 0; x < imgColor.cols; x++)
        {
            h = hsv_ptr[3 * x];
            s = hsv_ptr[3 * x + 1] / max_s;
            v = hsv_ptr[3 * x + 2];
            //hsv2rgb(h,s,v,r,g,b);
            Color c = ColorFromHSV(h, s, v);
            color_ptr[3 * x] = (IntPtr)c.B;
            color_ptr[3 * x + 1] = (IntPtr)c.G;
            color_ptr[3 * x + 2] = (IntPtr)c.R;
        }
    }

    drawLegendHSV(imgColor, 15, 25, 15);
}
I'm having trouble figuring out how to convert the two commented lines in the code. Can anyone help me with this?
Another thing: the Farneback algorithm gives two images, velx and vely; it does not give the flow as a single MCvMat, while the colorflow function takes an MCvMat parameter. Did I do anything wrong in the code? Thanks.
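Since velx and vely already hold the per-pixel horizontal and vertical velocities, one way around the MCvMat mismatch is to skip the colorflow port entirely and build the color image directly from the two velocity planes. Below is a minimal sketch assuming the velocities have been copied into plain float[,] arrays (for example out of Image<Gray, float>.Data); the method names and the HSV-to-RGB helper are my own illustration, not Emgu CV API:

using System;
using System.Drawing;

// Maps flow direction to hue and flow magnitude to saturation, mirroring the
// colorflow idea above. velX/velY are the per-pixel velocities from Farneback.
static Bitmap FlowToColor(float[,] velX, float[,] velY)
{
    int rows = velX.GetLength(0), cols = velX.GetLength(1);

    // First pass: find the largest magnitude so saturation can be normalized.
    double maxMag = 1e-6;
    for (int y = 0; y < rows; y++)
        for (int x = 0; x < cols; x++)
        {
            double mag = Math.Sqrt(velX[y, x] * velX[y, x] + velY[y, x] * velY[y, x]);
            if (mag > maxMag) maxMag = mag;
        }

    // Second pass: angle -> hue (0..360), magnitude -> saturation, fixed value.
    // SetPixel is slow, but fine for a sketch.
    var bmp = new Bitmap(cols, rows);
    for (int y = 0; y < rows; y++)
        for (int x = 0; x < cols; x++)
        {
            double angle = Math.Atan2(velY[y, x], velX[y, x]);
            if (angle < 0) angle += 2 * Math.PI;
            double hue = angle * 180.0 / Math.PI;
            double sat = Math.Sqrt(velX[y, x] * velX[y, x] + velY[y, x] * velY[y, x]) / maxMag;
            bmp.SetPixel(x, y, HsvToRgb(hue, sat, 0.9));
        }
    return bmp;
}

// Simple HSV -> RGB conversion (h in degrees, s and v in 0..1).
static Color HsvToRgb(double h, double s, double v)
{
    int hi = (int)(h / 60.0) % 6;
    double f = h / 60.0 - Math.Floor(h / 60.0);
    double p = v * (1 - s), q = v * (1 - f * s), t = v * (1 - (1 - f) * s);
    double r, g, b;
    switch (hi)
    {
        case 0: r = v; g = t; b = p; break;
        case 1: r = q; g = v; b = p; break;
        case 2: r = p; g = v; b = t; break;
        case 3: r = p; g = q; b = v; break;
        case 4: r = t; g = p; b = v; break;
        default: r = v; g = p; b = q; break;
    }
    return Color.FromArgb((int)(r * 255), (int)(g * 255), (int)(b * 255));
}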
I am using the following to create a circle using VertexPositionTexture:
public static ObjectData Circle(Vector2 origin, float radius, int slices)
{
    /// See below
}
The texture applied to it doesn't look right; it spirals out from the center. I have tried some other things, but nothing does it how I want. I would like it to just fan around the circle, or start in the top-left and finish in the bottom-right. Basically, I want it to be easier to create textures for it.
I know there are MUCH easier ways to do this without using meshes, but that is not what I am trying to accomplish right now.
This is the code that ended up working thanks to Pinckerman:
public static ObjectData Circle(Vector2 origin, float radius, int slices)
{
    VertexPositionTexture[] vertices = new VertexPositionTexture[slices + 2];
    int[] indices = new int[slices * 3];

    float x = origin.X;
    float y = origin.Y;
    float deltaRad = MathHelper.ToRadians(360) / slices;
    float delta = 0;
    float thetaInc = (((float)Math.PI * 2) / vertices.Length);

    // Center vertex of the fan, mapped to the middle of the texture.
    vertices[0] = new VertexPositionTexture(new Vector3(x, y, 0), new Vector2(.5f, .5f));
    float sliceSize = 1f / slices;

    // Rim vertices: position on the circle plus a matching UV on the unit circle in texture space.
    for (int i = 1; i < slices + 2; i++)
    {
        float newX = (float)Math.Cos(delta) * radius + x;
        float newY = (float)Math.Sin(delta) * radius + y;
        float textX = 0.5f + ((radius * (float)Math.Cos(delta)) / (radius * 2));
        float textY = 0.5f + ((radius * (float)Math.Sin(delta)) / (radius * 2));
        vertices[i] = new VertexPositionTexture(new Vector3(newX, newY, 0), new Vector2(textX, textY));
        delta += deltaRad;
    }

    // One triangle per slice, always starting from the center vertex.
    indices[0] = 0;
    indices[1] = 1;
    for (int i = 0; i < slices; i++)
    {
        indices[3 * i] = 0;
        indices[(3 * i) + 1] = i + 1;
        indices[(3 * i) + 2] = i + 2;
    }

    ObjectData thisData = new ObjectData()
    {
        Vertices = vertices,
        Indices = indices
    };
    return thisData;
}

public static ObjectData Ellipse()
{
    ObjectData thisData = new ObjectData()
    {
    };
    return thisData;
}
ObjectData is just a structure that contains an array of vertices & an array of indices.
Hope this helps others that may be trying to accomplish something similar.
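For context, here is a rough sketch of how the returned ObjectData might be drawn with a BasicEffect in XNA/MonoGame. The graphicsDevice and effect objects are assumed to be set up elsewhere (texture assigned, TextureEnabled = true, World/View/Projection configured); the names are illustrative, not part of the code above:

// Assumes a textured BasicEffect named "effect" and a GraphicsDevice named
// "graphicsDevice" already exist (hypothetical names for this sketch).
var circle = Circle(Vector2.Zero, 1f, 32);

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    graphicsDevice.DrawUserIndexedPrimitives(
        PrimitiveType.TriangleList,
        circle.Vertices, 0, circle.Vertices.Length,
        circle.Indices, 0, 32); // 32 slices -> 32 triangles
}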
It looks like a spiral because you've set the upper-left texture point, Vector2(0, 0), at the center of your "circle", and that's wrong. You need to put it on the top-left vertex of the top-left slice of your circle, because (0, 0) of your UV map is the upper-left corner of your texture.
I think you need to set (0.5, 0) for the upper vertex, (1, 0.5) for the right, (0.5, 1) for the lower and (0, 0.5) for the left, or something like this, and for the others use some trigonometry.
The center of your circle has to be Vector2(0.5, 0.5).
Regarding the trigonometry, I think you should do something like this.
The center of your circle has a UV value of Vector2(0.5, 0.5), and for the others (supposing the second point of the sequence is just to the right of the center, with a UV value of Vector2(1, 0.5)) try something like this:
vertices[i] = new VertexPositionTexture(new Vector3(newX, newY, 0), new Vector2(0.5f + radius * (float)Math.Cos(delta), 0.5f - radius * (float)Math.Sin(delta)));
I've just edited your third line in the for-loop. This should give you the UV coordinates you need for each point. I hope so.
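One small note on the UV math (my own observation, not from the original answer): in the working code above, the radius cancels out because radius * cos(delta) is divided by radius * 2, so the texture coordinates reduce to:

float textX = 0.5f + 0.5f * (float)Math.Cos(delta);
float textY = 0.5f + 0.5f * (float)Math.Sin(delta);

Writing it that way keeps the UVs inside the 0 to 1 range regardless of the circle's radius.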
I have a list of points containing the x and y locations on a page. I want to apply a rotation to all these points relative to an arbitrary pivot point on the page (for now, let's assume it's the center).
var points = new List<Point>();
points.Add(new Point(1, 1));
points.Add(new Point(15, 18));
points.Add(new Point(25, 2));
points.Add(new Point(160, 175));
points.Add(new Point(150, 97));

const int pageHeight = 300;
const int pageWidth = 400;

var pivotPoint = new Point(200, 150); // Center of the page.
var angle = 45; // In degrees.

// Apply rotation.
Do I need some formula here?
public static Point Rotate(Point point, Point pivot, double angleDegree)
{
    double angle = angleDegree * Math.PI / 180;
    double cos = Math.Cos(angle);
    double sin = Math.Sin(angle);

    int dx = point.X - pivot.X;
    int dy = point.Y - pivot.Y;

    double x = cos * dx - sin * dy + pivot.X;
    double y = sin * dx + cos * dy + pivot.Y; // note: pivot.Y here, not pivot.X

    Point rotated = new Point((int)Math.Round(x), (int)Math.Round(y));
    return rotated;
}
static void Main(string[] args)
{
    Console.WriteLine(Rotate(new Point(1, 1), new Point(0, 0), 45));
}
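As a quick check of the math (not part of the original answer): for the call above, dx = 1 and dy = 1, so x = cos 45° − sin 45° = 0 and y = sin 45° + cos 45° ≈ 1.41; the printed point is therefore the rounded (0, 1).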
If you have a large number of points to rotate, you might want to precompute the rotation matrix…
[C -S U]
[S C V]
[0 0 1]
…where…
C = cos(θ)
S = sin(θ)
U = (1 - C) * pivot.x + S * pivot.y
V = (1 - C) * pivot.y - S * pivot.x
You then rotate each point as follows:
rotated.x = C * original.x - S * original.y + U;
rotated.y = S * original.x + C * original.y + V;
The above formula is the result of combining three transforms…
rotated = translate(pivot) * rotate(θ) * translate(-pivot) * original
…where…
translate([x y]) = [1 0 x]
[0 1 y]
[0 0 1]
rotate(θ) = [cos(θ) -sin(θ) 0]
[sin(θ) cos(θ) 0]
[ 0 0 1]
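A short C# sketch of that precomputation, using the same Point type as above (my own illustration of this answer, with hypothetical names):

using System;
using System.Collections.Generic;
using System.Drawing;

// Precompute C, S, U, V once, then rotate every point with two multiply-adds each.
static List<Point> RotateAll(IEnumerable<Point> points, Point pivot, double angleDegrees)
{
    double rad = angleDegrees * Math.PI / 180.0;
    double C = Math.Cos(rad);
    double S = Math.Sin(rad);
    double U = (1 - C) * pivot.X + S * pivot.Y;
    double V = (1 - C) * pivot.Y - S * pivot.X;

    var rotated = new List<Point>();
    foreach (var p in points)
    {
        double x = C * p.X - S * p.Y + U;
        double y = S * p.X + C * p.Y + V;
        rotated.Add(new Point((int)Math.Round(x), (int)Math.Round(y)));
    }
    return rotated;
}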
If you rotate a point (x, y) around a point (x1, y1) by some angle a, then you need this formula:
x2 = cos(a) * (x-x1) - sin(a) * (y-y1) + x1
y2 = sin(a) * (x-x1) + cos(a) * (y-y1) + y1
Point newRotatedPoint = new Point(x2, y2);
Summary:
I'm given a series of points in 3D space, and I want to analyze them from any viewing angle. I'm trying to figure out how to reproduce the "Look At" functionality of OpenGL in WPF. I want the mouse's X and Y movement to manipulate the phi and theta spherical coordinates (respectively) of the camera, so that as I move the mouse, the camera appears to orbit around the center of mass (generally the origin) of the point cloud, which represents the target of the Look At.
What I've done:
I have made the following code, but so far it isn't doing what I want:
internal static Matrix3D CalculateLookAt(Vector3D eye, Vector3D at = new Vector3D(), Vector3D up = new Vector3D())
{
    if (Math.Abs(up.Length - 0.0) < double.Epsilon) up = new Vector3D(0, 1, 0);

    var zaxis = (at - eye);
    zaxis.Normalize();
    var xaxis = Vector3D.CrossProduct(up, zaxis);
    xaxis.Normalize();
    var yaxis = Vector3D.CrossProduct(zaxis, xaxis);

    return new Matrix3D(
        xaxis.X, yaxis.X, zaxis.X, 0,
        xaxis.Y, yaxis.Y, zaxis.Y, 0,
        xaxis.Z, yaxis.Z, zaxis.Z, 0,
        Vector3D.DotProduct(xaxis, -eye), Vector3D.DotProduct(yaxis, -eye), Vector3D.DotProduct(zaxis, -eye), 1
    );
}
I got the algorithm from this link: http://msdn.microsoft.com/en-us/library/bb205342(VS.85).aspx
I then apply the returned matrix to all of the points using this:
var vector = new Vector3D(p.X, p.Y, p.Z);
var projection = Vector3D.Multiply(vector, _camera); // _camera is the LookAt Matrix

if (double.IsNaN(projection.X)) projection.X = 0;
if (double.IsNaN(projection.Y)) projection.Y = 0;
if (double.IsNaN(projection.Z)) projection.Z = 0;

return new Point(
    (dispCanvas.ActualWidth * projection.X / 320),
    (dispCanvas.ActualHeight * projection.Y / 240)
);
I am calculating the center of all the points as the at vector, and I've been setting my initial eye vector at (center.X, center.Y, center.Z + 100), which is plenty far away from all the points.
I then take the mouse movement and apply the following code to get the spherical coordinates and pass them into the CalculateLookAt function:
var center = GetCenter(_points);
var pos = e.GetPosition(Canvas4); //e is of type MouseButtonEventArgs
var delta = _previousPoint - pos;
double r = 100;
double theta = delta.Y * Math.PI / 180;
double phi = delta.X * Math.PI / 180;
var x = r * Math.Sin(theta) * Math.Cos(phi);
var y = r * Math.Cos(theta);
var z = -r * Math.Sin(theta) * Math.Sin(phi);
_camera = MathHelper.CalculateLookAt(new Vector3D(center.X * x, center.Y * y, center.Z * z), new Vector3D(center.X, center.Y, center.Z));
UpdateCanvas(); // Redraws the points on the canvas using the new _camera values
Conclusion:
This does not make the camera orbit around the points. So either my understanding of how to use the Look At function is off, or my math is incorrect.
Any help would be very much appreciated.
A Vector3D won't be translated by an affine transform. It ignores the translation component because it is a vector, a direction that exists only in vector space, not in affine space (i.e. 3D vector space with a translation component). For a position you need a Point3D:
var m = new Matrix3D(
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    10, 10, 10, 1);

var v = new Point3D(1, 1, 1);
var r = Point3D.Multiply(v, m); // 11,11,11
Note your presumed answer is also incorrect, as it should be 10 + 1 for each component, since your vector is [1,1,1].
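Applied to the projection step from the question, that would mean transforming the point cloud with Point3D instead of Vector3D. A sketch along these lines (my adaptation of the question's snippet, not the original code):

// _camera is the LookAt matrix returned by CalculateLookAt.
var p3d = new Point3D(p.X, p.Y, p.Z);
var projection = Point3D.Multiply(p3d, _camera); // includes the translation row

return new Point(
    dispCanvas.ActualWidth * projection.X / 320,
    dispCanvas.ActualHeight * projection.Y / 240);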
Well, it turns out that the Matrix3D library has some interesting issues.
I noticed that Vector3D.Multiply(vector, matrix) would not translate the vector.
For example:
var matrixTest = new Matrix3D(
    1, 0, 0, 0,
    0, 1, 0, 0,
    0, 0, 1, 0,
    10, 10, 10, 1
);

var vectorTest = new Vector3D(1, 1, 1);
var result = Vector3D.Multiply(vectorTest, matrixTest);
// result = {1,1,1}, should be {11,11,11}
I ended up having to rewrite some of the basic matrix math functions in order for the code to work.
Everything was working on the logic side; it was the basic math (handled by the Matrix3D library) that was the problem.
Here is the fix. Replace all Vector3D.Multiply method calls with this:
public static Vector3D Vector3DMultiply(Vector3D vector, Matrix3D matrix)
{
    return new Vector3D(
        vector.X * matrix.M11 + vector.Y * matrix.M12 + vector.Z * matrix.M13 + matrix.OffsetX,
        vector.X * matrix.M21 + vector.Y * matrix.M22 + vector.Z * matrix.M23 + matrix.OffsetY,
        vector.X * matrix.M31 + vector.Y * matrix.M32 + vector.Z * matrix.M33 + matrix.OffsetZ
    );
}
And everything works!