I wanted to know whether writing points with a for loop inside a Begin/End batch works, so I read up on a sphere algorithm and produced the code below based on my reading. There are some problems with it, as you can see in the output screen capture. My goal is to produce a sphere procedurally and then modify it at runtime,
but for now I would like to keep the goal short-term and figure out why the faces are not correct. Does anyone have any ideas?
I've got this code:
private void openGLControl_OpenGLDraw(object sender, RenderEventArgs e)
{
    // Get the OpenGL object.
    OpenGL gl = openGLControl.OpenGL;

    // Clear the color and depth buffer.
    gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);

    // Load the identity matrix.
    gl.LoadIdentity();

    // Rotate around the Y axis.
    gl.Rotate(rotation, 0.0f, 1.0f, 0.0f);

    // Draw a ball
    // Drawing mode
    gl.PolygonMode(SharpGL.Enumerations.FaceMode.FrontAndBack, SharpGL.Enumerations.PolygonMode.Lines);

    // Ball fields
    double radius = 4.0d;
    const double DEGREE = Math.PI / 11.25;
    double x = 0;
    double y = 0;
    double z = 0;

    // Ball batch
    gl.Begin(OpenGL.GL_TRIANGLE_STRIP_ADJACENCY);
    for (double j = 0.0d; j < Math.PI; j = j + DEGREE)
    {
        for (double i = 0; i < 2 * Math.PI; i = i + DEGREE)
        {
            x = radius * Math.Cos(i) * Math.Sin(j);
            y = radius * Math.Sin(j) * Math.Sin(i);
            z = radius * Math.Cos(j);
            gl.Color(Math.Abs(x + y), Math.Abs(y + z), Math.Abs(z + x));
            gl.Vertex(x, y, z);
        }
    }
    gl.End();

    // Nudge the rotation.
    rotation += 3.0f;
}
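For comparison, a common immediate-mode layout is one GL_TRIANGLE_STRIP per latitude band, emitting a vertex on the band's upper ring and lower ring for every longitude step (GL_TRIANGLE_STRIP_ADJACENCY is an adjacency primitive intended for geometry shaders). The following is only a rough sketch that reuses the gl and radius variables above, with hypothetical stacks/sectors counts:
// Sketch: one triangle strip per latitude band, with integer step counts so the rings close.
const int stacks = 16;   // latitude bands
const int sectors = 32;  // longitude steps per band
for (int stack = 0; stack < stacks; stack++)
{
    double j0 = Math.PI * stack / stacks;        // latitude of the band's upper ring
    double j1 = Math.PI * (stack + 1) / stacks;  // latitude of the band's lower ring
    gl.Begin(OpenGL.GL_TRIANGLE_STRIP);
    for (int sector = 0; sector <= sectors; sector++) // <= repeats the seam column to close the ring
    {
        double i = 2 * Math.PI * sector / sectors;
        double x0 = radius * Math.Cos(i) * Math.Sin(j0);
        double y0 = radius * Math.Sin(i) * Math.Sin(j0);
        double z0 = radius * Math.Cos(j0);
        double x1 = radius * Math.Cos(i) * Math.Sin(j1);
        double y1 = radius * Math.Sin(i) * Math.Sin(j1);
        double z1 = radius * Math.Cos(j1);
        gl.Color(Math.Abs(x0 + y0), Math.Abs(y0 + z0), Math.Abs(z0 + x0));
        gl.Vertex(x0, y0, z0);   // upper-ring vertex
        gl.Vertex(x1, y1, z1);   // lower-ring vertex; consecutive pairs zig-zag between the rings
    }
    gl.End();
}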
I'm attempting to write a matrix transform to convert chart points to device pixels in SkiaSharp. I have it functional as long as I use 0,0 as my minimum chart coordinates, but if I need to start from a negative number, it causes the drawing to shift left and down. That is to say, the X axis is shifted to the left off the window and the Y axis is shifted down off the window.
This is intended to be a typical line chart (minimum chart point at the lower left while minimum device point at the upper left). I have accounted for that already in the transform.
While stepping through code I can see that the coordinates returned from the Matrix are not what I expect them to be, so I believe the issue to be with my transform but I haven't been able to pinpoint it.
UPDATE: After further examination, I believe I was mistaken: it is not shifted, it's just not scaling properly toward the max end of the screen. There is a bigger margin at the top and right side of the chart than there should be, but the bottom and left side are fine. I've been unable to determine why the scaling doesn't fill the canvas.
Below are my matrix methods:
private SKMatrix ChartToDeviceMatrix, DeviceToChartMatrix;

private void ConfigureTransforms(SKPoint ChartMin,
    SKPoint ChartMax, SKPoint DeviceMin, SKPoint DeviceMax)
{
    this.ChartToDeviceMatrix = SKMatrix.MakeIdentity();
    float xScale = (DeviceMax.X - DeviceMin.X) / (ChartMax.X - ChartMin.X);
    float yScale = (DeviceMin.Y - DeviceMax.Y) / (ChartMax.Y - ChartMin.Y);
    this.ChartToDeviceMatrix.SetScaleTranslate(xScale, yScale, DeviceMin.X, DeviceMax.Y);
    this.ChartToDeviceMatrix.TryInvert(out this.DeviceToChartMatrix);
}

// Transform a point from chart to device coordinates.
private SKPoint ChartToDevice(SKPoint point)
{
    return this.ChartToDeviceMatrix.MapPoint(point);
}
The code invoking this is:
void OnCanvasViewPaintSurface(object sender, SKPaintSurfaceEventArgs args)
{
    SKImageInfo info = args.Info;
    SKSurface surface = args.Surface;
    SKCanvas canvas = surface.Canvas;

    float strokeWidth = 1;
    float margin = 10;
    // SKPaint definitions omitted for brevity.

    var ChartMin = new SKPoint(-10, -1); // Works fine if I change this to 0,0
    var ChartMax = new SKPoint(110, 11);
    var DeviceMin = new SKPoint(margin, margin);
    var DeviceMax = new SKPoint(info.Width - margin, info.Height - margin);

    const float stepX = 10;
    const float stepY = 1;
    const float tickX = 0.5f;
    const float tickY = 0.075f;

    // Prepare the transformation matrices.
    this.ConfigureTransforms(ChartMin, ChartMax, DeviceMin, DeviceMax);

    // Draw the X axis.
    var lineStart = new SKPoint(ChartMin.X, 0);
    var lineEnd = new SKPoint(ChartMax.X, 0);
    canvas.DrawLine(this.ChartToDevice(lineStart), this.ChartToDevice(lineEnd), axisPaint);

    // X axis tick marks
    for (float x = stepX; x <= ChartMax.X - stepX; x += stepX)
    {
        var tickMin = new SKPoint(x, -tickY);
        var tickMax = new SKPoint(x, tickY);
        canvas.DrawLine(this.ChartToDevice(tickMin), this.ChartToDevice(tickMax), axisPaint);
    }

    // Draw the Y axis.
    // The inversion of the above, basically the same.
I was able to discover my own problem with enough time. I wasn't calculating the offset correctly.
this.ChartToDeviceMatrix.SetScaleTranslate(xScale, yScale, DeviceMin.X, DeviceMax.Y);
Should have been:
this.ChartToDeviceMatrix.SetScaleTranslate(xScale, yScale, -ChartMin.X * xScale + DeviceMin.X, -ChartMin.Y * yScale + DeviceMax.Y);
Final Matrix method was:
private SKMatrix ChartToDeviceMatrix, DeviceToChartMatrix;

private void ConfigureTransforms(SKPoint ChartMin, SKPoint ChartMax, SKPoint DeviceMin, SKPoint DeviceMax)
{
    this.ChartToDeviceMatrix = SKMatrix.MakeIdentity();
    float xScale = (DeviceMax.X - DeviceMin.X) / (ChartMax.X - ChartMin.X);
    float yScale = (DeviceMin.Y - DeviceMax.Y) / (ChartMax.Y - ChartMin.Y);
    float xOffset = -ChartMin.X * xScale + DeviceMin.X;
    float yOffset = -ChartMin.Y * yScale + DeviceMax.Y;
    this.ChartToDeviceMatrix.SetScaleTranslate(xScale, yScale, xOffset, yOffset);
    this.ChartToDeviceMatrix.TryInvert(out this.DeviceToChartMatrix);
}
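As a quick sanity check of those offsets, mapping the chart corners through the fixed matrix should land them exactly on the device corners. A hypothetical snippet (assuming an 800x600 canvas with the 10-pixel margin used above):
// Hypothetical check: ChartMin should map to the lower-left device corner, ChartMax to the upper-right.
ConfigureTransforms(new SKPoint(-10, -1), new SKPoint(110, 11),
                    new SKPoint(10, 10), new SKPoint(790, 590));
SKPoint lowerLeft = ChartToDevice(new SKPoint(-10, -1));   // expected (10, 590)
SKPoint upperRight = ChartToDevice(new SKPoint(110, 11));  // expected (790, 10)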
I am using Eigen to calculate the best fit of a set of points to a plane. What I then need to do with this data is rotate the set of points so they lie flat, negating the rotation value.
My code is:
cv::Point2f plane_from_points(const std::vector<Vector3> & c)
{
    // copy coordinates to matrix in Eigen format
    size_t num_atoms = c.size();
    Eigen::Matrix< Vector3::Scalar, Eigen::Dynamic, Eigen::Dynamic > coord(3, num_atoms);
    for (size_t i = 0; i < num_atoms; ++i) coord.col(i) = c[i];

    // calculate centroid
    Vector3 centroid(coord.row(0).mean(), coord.row(1).mean(), coord.row(2).mean());

    // subtract centroid
    coord.row(0).array() -= centroid(0);
    coord.row(1).array() -= centroid(1);
    coord.row(2).array() -= centroid(2);

    // we only need the left-singular matrix here
    // http://math.stackexchange.com/questions/99299/best-fitting-plane-given-a-set-of-points
    auto svd = coord.jacobiSvd(Eigen::ComputeThinU | Eigen::ComputeThinV);
    Vector3 plane_normal = svd.matrixU().rightCols<1>();

    float x = plane_normal[0];
    float y = plane_normal[1];
    float z = plane_normal[2];
    float angle = atan2(x, z) * 180 / PI;
    float angle2 = atan2(y, z) * 180 / PI;

    cv::Point ret(angle, angle2);
    return ret;
}
Then, in C#, I convert the angle values to a quaternion, to rotate my object:
public static Quaternion QuatFromEuler(double yaw, double pitch, double roll)
{
    yaw = Deg2Rad(yaw);
    pitch = Deg2Rad(pitch);
    roll = Deg2Rad(roll);

    double rollOver2 = roll * 0.5f;
    double sinRollOver2 = (double)Math.Sin((double)rollOver2);
    double cosRollOver2 = (double)Math.Cos((double)rollOver2);
    double pitchOver2 = pitch * 0.5f;
    double sinPitchOver2 = (double)Math.Sin((double)pitchOver2);
    double cosPitchOver2 = (double)Math.Cos((double)pitchOver2);
    double yawOver2 = yaw * 0.5f;
    double sinYawOver2 = (double)Math.Sin((double)yawOver2);
    double cosYawOver2 = (double)Math.Cos((double)yawOver2);

    Quaternion result = new Quaternion();
    result.W = cosYawOver2 * cosPitchOver2 * cosRollOver2 + sinYawOver2 * sinPitchOver2 * sinRollOver2;
    result.X = cosYawOver2 * sinPitchOver2 * cosRollOver2 + sinYawOver2 * cosPitchOver2 * sinRollOver2;
    result.Y = sinYawOver2 * cosPitchOver2 * cosRollOver2 - cosYawOver2 * sinPitchOver2 * sinRollOver2;
    result.Z = cosYawOver2 * cosPitchOver2 * sinRollOver2 - sinYawOver2 * sinPitchOver2 * cosRollOver2;
    return result;
}
This gives me:
angles: -177 -126
quat: -0.453834928533952,-0.890701198505913,-0.0233238317256566,0.0118840858439476
Which, when I apply it, looks nothing like it should. (I expect a roughly 45 degree rotation in one axis; I get a 180 degree flip.)
I have tried switching the axes to check for coordinate space mismatch(which is likely), but I cannot get this to work. Am I doing something wrong?
I have checked the 3D points that I pass into the algorithm, and they are correct, so my issue is either in the point-to-plane code or in the quaternion conversion.
Any help would be much appreciated. Thank you.
If you want to calculate the quaternion which rotates one plane to another, simply compute the quaternion that rotates one plane's normal to the other's:
#include <Eigen/Geometry>
#include <iostream>

int main() {
    using namespace Eigen;
    // replace this by your actual plane normal:
    Vector3d plane_normal = Vector3d::Random().normalized();
    // Quaternion which rotates plane_normal to UnitZ, i.e., the plane to the XY-plane:
    Quaterniond rotQ = Quaterniond::FromTwoVectors(plane_normal, Vector3d::UnitZ());
    std::cout << "Random plane_normal:  " << plane_normal.transpose() << '\n';
    std::cout << "Rotated plane_normal: " << (rotQ * plane_normal).transpose() << '\n';
}
Also, don't store your angles in degrees, ever (it may sometimes make sense to output them in degrees ...).
And more importantly: Stop using Euler Angles!
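If the rotation ultimately has to be applied on the C# side, the same from-two-vectors idea can be used there directly, skipping Euler angles entirely. The following is only a sketch using System.Numerics types (not the poster's Quaternion class), and it assumes the fitted plane normal and centroid have already been passed over from the C++ code:
using System;
using System.Numerics;

static class PlaneAlignment
{
    // Quaternion that rotates unit vector 'from' onto unit vector 'to',
    // analogous to Eigen's Quaterniond::FromTwoVectors. Sketch only: the
    // degenerate 180-degree case just picks an arbitrary perpendicular axis.
    public static Quaternion FromTwoVectors(Vector3 from, Vector3 to)
    {
        from = Vector3.Normalize(from);
        to = Vector3.Normalize(to);
        float dot = Vector3.Dot(from, to);

        if (dot < -0.999999f)
        {
            // Opposite vectors: rotate 180 degrees around any axis perpendicular to 'from'.
            Vector3 axis = Vector3.Cross(Vector3.UnitX, from);
            if (axis.LengthSquared() < 1e-6f)
                axis = Vector3.Cross(Vector3.UnitY, from);
            return Quaternion.CreateFromAxisAngle(Vector3.Normalize(axis), (float)Math.PI);
        }

        // Half-angle construction: (cross, 1 + dot), then normalize.
        Vector3 cross = Vector3.Cross(from, to);
        return Quaternion.Normalize(new Quaternion(cross.X, cross.Y, cross.Z, 1f + dot));
    }
}

// Usage sketch: rotate each point so the fitted plane lies in the XY plane.
// Quaternion q = PlaneAlignment.FromTwoVectors(planeNormal, Vector3.UnitZ);
// Vector3 flattened = Vector3.Transform(point - centroid, q);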
I am using the following to create a circle using VertexPositionTexture:
public static ObjectData Circle(Vector2 origin, float radius, int slices)
{
/// See below
}
The texture that is applied to it doesn't look right; it spirals out from the center. I have tried some other things but nothing does it how I want. I would like for it to just fan around the circle, or start in the top-left and finish in the bottom-right. Basically I want it to be easier to create textures for it.
I know there are MUCH easier ways to do this without using meshes, but that is not what I am trying to accomplish right now.
This is the code that ended up working thanks to Pinckerman:
public static ObjectData Circle(Vector2 origin, float radius, int slices)
{
    VertexPositionTexture[] vertices = new VertexPositionTexture[slices + 2];
    int[] indices = new int[slices * 3];

    float x = origin.X;
    float y = origin.Y;

    float deltaRad = MathHelper.ToRadians(360) / slices;
    float delta = 0;
    float thetaInc = (((float)Math.PI * 2) / vertices.Length);

    vertices[0] = new VertexPositionTexture(new Vector3(x, y, 0), new Vector2(.5f, .5f));

    float sliceSize = 1f / slices;
    for (int i = 1; i < slices + 2; i++)
    {
        float newX = (float)Math.Cos(delta) * radius + x;
        float newY = (float)Math.Sin(delta) * radius + y;
        float textX = 0.5f + ((radius * (float)Math.Cos(delta)) / (radius * 2));
        float textY = 0.5f + ((radius * (float)Math.Sin(delta)) / (radius * 2));
        vertices[i] = new VertexPositionTexture(new Vector3(newX, newY, 0), new Vector2(textX, textY));
        delta += deltaRad;
    }

    indices[0] = 0;
    indices[1] = 1;
    for (int i = 0; i < slices; i++)
    {
        indices[3 * i] = 0;
        indices[(3 * i) + 1] = i + 1;
        indices[(3 * i) + 2] = i + 2;
    }

    ObjectData thisData = new ObjectData()
    {
        Vertices = vertices,
        Indices = indices
    };
    return thisData;
}

public static ObjectData Ellipse()
{
    ObjectData thisData = new ObjectData()
    {
    };
    return thisData;
}
ObjectData is just a structure that contains an array of vertices & an array of indices.
Hope this helps others that may be trying to accomplish something similar.
It looks like a spiral because you've set the upper-left point of the texture, Vector2(0,0), in the center of your "circle", and that's wrong. You need to set it on the top-left vertex of the top-left slice of your circle, because 0,0 of your UV map is the upper-left corner of your texture.
I think you need to set (0.5, 0) for the upper vertex, (1, 0.5) for the right, (0.5, 1) for the lower and (0, 0.5) for the left, or something like this, and for the others use some trigonometry.
The center of your circle has to be Vector2(0.5, 0.5).
Regarding the trigonometry, I think you should do something like this.
The center of your circle has a UV value of Vector2(0.5, 0.5), and for the others (supposing the second point of the sequence is just to the right of the center, having a UV value of Vector2(1, 0.5)) try something like this:
vertices[i] = new VertexPositionTexture(new Vector3(newX, newY, 0), new Vector2(0.5f + 0.5f * (float)Math.Cos(delta), 0.5f - 0.5f * (float)Math.Sin(delta)));
I've just edited your third line in the for-loop. This should give you the UV coordinates you need for each point. I hope so.
I've been trying to combine a couple of Riemer's tutorials to make a terrain that is textured and lit. I'm almost there, but I can't get the application of the texture right. I believe the problem is in SetUpVertices(), with the setting of the texture coordinates. I know that currently the code sets them all to (0, 0), and I need to have them set so they reach the corners of the texture, but I can't seem to get the code right. Is anybody out there able to assist?
private void SetUpVertices()
{
    vertices = new VertexPositionNormalTexture[terrainWidth * terrainHeight];
    for (int x = 0; x < terrainWidth; x++)
    {
        for (int y = 0; y < terrainHeight; y++)
        {
            vertices[x + y * terrainWidth].Position = new Vector3(x, -y, heightData[x, y]);
            vertices[x + y * terrainWidth].TextureCoordinate.X = 0;
            vertices[x + y * terrainWidth].TextureCoordinate.Y = 0;
        }
    }
}
I've added the full code of Game1.cs to this pastie http://pastebin.com/REd8QDZA
You can stretch the texture across the surface by interpolating from 0 to 1:
vertices[x + y * terrainWidth].TextureCoordinate.X = x / (terrainWidth - 1f);
vertices[x + y * terrainWidth].TextureCoordinate.Y = y / (terrainHeight - 1f);
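Folded back into the SetUpVertices method from the question, the loop body would look roughly like this (a sketch that assumes terrainWidth and terrainHeight are ints greater than 1):
private void SetUpVertices()
{
    vertices = new VertexPositionNormalTexture[terrainWidth * terrainHeight];
    for (int x = 0; x < terrainWidth; x++)
    {
        for (int y = 0; y < terrainHeight; y++)
        {
            vertices[x + y * terrainWidth].Position = new Vector3(x, -y, heightData[x, y]);
            // Spread the UVs over [0, 1] so the texture's corners line up with the terrain's corners.
            vertices[x + y * terrainWidth].TextureCoordinate.X = x / (terrainWidth - 1f);
            vertices[x + y * terrainWidth].TextureCoordinate.Y = y / (terrainHeight - 1f);
        }
    }
}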
I have some problems with collision. I want to get the coordinates of a sprite that can be rotated, scaled, or whatever. It's similar to Riemer's guide, but he's getting a collision between two sprites and I only need those points where the alpha is non-zero.
Better see source:
public Color[,] TextureTo2DArray(Texture2D texture) // to get color array
{
    Color[] colors1D = new Color[texture.Width * texture.Height];
    texture.GetData(colors1D);

    Color[,] colors2D = new Color[texture.Width, texture.Height];
    for (int x = 0; x < texture.Width; x++)
        for (int y = 0; y < texture.Height; y++)
            colors2D[x, y] = colors1D[x + y * texture.Width];

    return colors2D;
}
Getting the colors is pretty easy, but here is the part where I get the points:
public Vector2 TexturePos(Color[,] Color, Matrix matrix)
{
    int width1 = Color.GetLength(0);
    int height1 = Color.GetLength(1);
    for (int x = 0; x < width1; x++)
    {
        for (int y = 0; y < height1; y++)
        {
            Vector2 pos1 = new Vector2(x, y);
            if (Color[x, y].A > 0)
            {
                Vector2 screenPos = Vector2.Transform(pos1, matrix);
                return screenPos;
            }
        }
    }
    return new Vector2(-1, -1);
}
And for matrix I'm using this:
Matrix matrix =
    Matrix.CreateTranslation(new Vector3(origin, 0)) *
    Matrix.CreateRotationZ(MathHelper.ToRadians(rotation)) *
    Matrix.CreateScale(scale) *
    Matrix.CreateTranslation(new Vector3(pos, 0));
The sprite is rectangular but I get circular movement: I'm rotating it (rotation += 0.5), adding gravity, and making it collide with some Y value:
Pos.Y += 5;
if (Position.Y >= 200)
    BoxPos.Y -= 5;
And the result is that it rotates like a circle colliding with a line, not like a rectangle.
Is this normal? Maybe I need some fixes in the source?
"That method is supposed to get a position of a pixel (in sprite) that is not transperent but is rotated, scaled (depending on sprite)."
You need to have a look at this:
http://create.msdn.com/en-US/education/catalog/tutorial/collision_2d_perpixel_transformed
This is a great article about 2D collision in XNA, and it includes an example method that performs per-pixel collision detection for a scaled and rotated pair of sprites.
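For reference, the core idea of that tutorial is to map each pixel of one sprite through its transform and the inverse of the other sprite's transform, then compare alpha values. A rough sketch of that idea (not the tutorial's exact code, which steps incrementally for speed) could look like this:
// Sketch of transformed per-pixel overlap, assuming 1D color arrays from Texture2D.GetData<Color>().
public static bool IntersectPixels(
    Matrix transformA, int widthA, int heightA, Color[] dataA,
    Matrix transformB, int widthB, int heightB, Color[] dataB)
{
    // Matrix that takes a point from A's local (texture) space straight into B's local space.
    Matrix transformAToB = transformA * Matrix.Invert(transformB);

    for (int yA = 0; yA < heightA; yA++)
    {
        for (int xA = 0; xA < widthA; xA++)
        {
            // Where does this pixel of A land inside B?
            Vector2 posInB = Vector2.Transform(new Vector2(xA, yA), transformAToB);
            int xB = (int)Math.Round(posInB.X);
            int yB = (int)Math.Round(posInB.Y);

            if (xB >= 0 && xB < widthB && yB >= 0 && yB < heightB)
            {
                // A hit requires both overlapping pixels to be non-transparent.
                if (dataA[xA + yA * widthA].A > 0 && dataB[xB + yB * widthB].A > 0)
                    return true;
            }
        }
    }
    return false;
}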