Drawing Parallel Lines in C#

Sorry if this has been asked before, but I couldn't find a valid response, or at least one I could understand.
Currently I have code that draws a line from joint to joint in the Kinect; this forms the bone:
drawingContext.DrawLine(drawPen, jointPoints[jointType0], jointPoints[jointType1]);
The picture above shows parallel lines joining circle to circle. Can someone please explain to me or show me how to create these lines?

If you have a line from point p0 to point p1, and want to create a parallel line / offset line, you need to use either trigonometry or vector math.
To do it using vector math:
Find the direction vector of the line
Find a vector perpendicular to that
Use the perpendicular vector to offset p0 and p1
In code (offsetDistance is the desired gap between the lines):
// Direction of the original line
Vector vLine = p1 - p0;
vLine.Normalize();
// A vector perpendicular to the line direction
Vector vPerp = new Vector(-vLine.Y, vLine.X);
// Offset both endpoints sideways by the desired distance
Point newP0 = p0 + vPerp * offsetDistance;
Point newP1 = p1 + vPerp * offsetDistance;
drawingContext.DrawLine(drawPen, newP0, newP1);
Reverse the offset or negate vPerp to get a line on the other side. How you do this depends on what you have available. System.Windows.Vector and System.Windows.Point should work fine since you're using WPF.
If you are interested in what's going on here in more detail, get your googling on and search for vector math or linear algebra.
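To get the two parallel lines either side of the bone in your picture, a small sketch along the same lines (reusing jointPoints, drawPen and drawingContext from your code; offsetDistance is a hypothetical spacing you choose):
// Offset the joint-to-joint line once to each side
Point p0 = jointPoints[jointType0];
Point p1 = jointPoints[jointType1];
Vector vLine = p1 - p0;
vLine.Normalize();
Vector vPerp = new Vector(-vLine.Y, vLine.X);
drawingContext.DrawLine(drawPen, p0 + vPerp * offsetDistance, p1 + vPerp * offsetDistance);
drawingContext.DrawLine(drawPen, p0 - vPerp * offsetDistance, p1 - vPerp * offsetDistance);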

Related

How to determine an 360° angle from relative direction?

I have a minimap on the screen and I'm having trouble determining the angle to a target relative to the camera.
Here is some drawing of what I mean with some examples of camera position and direction:
The black triangles represent the camera.
The black arrows define their forward direction.
The blue arrows are the direction to the target (= red dot in the middle) from the camera.
The circle drawn at each camera shows the wanted orientation of its red dot.
Here's my code so far:
//Anchored position around minimap circle
void CalculateScreenPos()
{
    Vector3 dir = transform.position - cam.position; //xz distance
    dir.y = 0;
    angle = Angle360(cam.forward.normalized, dir.normalized, cam.right);

    Vector2 desiredPosition = new Vector2(
        eX * -Mathf.Cos(angle * Mathf.PI / 180f) * dir.z,
        eY *  Mathf.Sin(angle * Mathf.PI / 180f) * dir.x
    );

    minimapBlip.anchoredPosition = desiredPosition;
}

public static float Angle360(Vector3 from, Vector3 to, Vector3 right)
{
    float angle = Vector3.Angle(from, to);
    return (Vector3.Angle(right, to) > 90f) ? 360f - angle : angle;
}
But the angle doesn't seem to work properly; I found out that it ranges from
0° + cam.eulerAngles.x to 360° - cam.eulerAngles.x.
So it works when the cam is never looking to the ground or sky.
How do I get rid of the unwanted added x euler angles without subtracting/adding them to the angle again?
angle -= cam.transform.eulerAngles.x
is a bad choice, because when the result of Angle360 is 360 it gets subtracted again, immediately leading to a wrong angle.
Also, the circle can be an ellipse; that's why I have put eX and eY in the desired position, which determine the extents of the ellipse.
I don't know what data types you are using (e.g. cam.position, cam.heading, etc.), but here is some general guidance that will help you debug this problem.
Write a unit test. Prepare some canned data, both for input (e.g. set cam.position/cam.heading, transform, etc.) and output (the expected angle). Do this for a few cases, e.g. all the six examples you've shown. This makes it easier to repeat your test, and understand which case isn't working - you might see a pattern. It also makes it easy to run your code through a debugger.
Break your functions into logical units of work. What does Angle360 do? I guess you understand what it is supposed to do, but I think it is really two functions
Get the angle between the two vectors (current direction and target direction)
Rotate map (or something like that)
Write tests for those broken out functions. You're just using some library angle difference function - is it behaving as you expect?
Name your variables. You have a vector called right - what is that? There's no comment. Is it right as in 'correct', or as in 'opposite of left'? Why is it Vector3.Angle(right, to), and why not Vector3.Angle(to, right)?
Generally you are performing some math and getting tripped up because the code is not clear: things are not well named and the approach is hard to follow. Break the problem into smaller pieces and the issue will become obvious.
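As a concrete example of the canned-data idea, a minimal NUnit-style test for Angle360 (the expected values follow from Unity's Vector3.Angle; adapt the framework and vectors to your own setup):
[Test]
public void Angle360_ReturnsExpectedQuadrants()
{
    // Target off to the right of "forward": plain 90 degrees
    Assert.AreEqual(90f, Angle360(Vector3.forward, Vector3.right, Vector3.right), 0.01f);
    // Target off to the left: should come back on the far side of the circle
    Assert.AreEqual(270f, Angle360(Vector3.forward, Vector3.left, Vector3.right), 0.01f);
}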
I solved it:
angle = Mathf.Atan2(heading.x, heading.z) * Mathf.Rad2Deg - Camera.main.transform.eulerAngles.y;

Vector2 desiredPosition = new Vector2(
    eX * -Mathf.Cos((angle + 90) * Mathf.Deg2Rad),
    eY *  Mathf.Sin((angle + 90) * Mathf.Deg2Rad)
);
Thanks for the help so far and happy coding :D

Bouncing a ball off a wall with arbitrary angle?

I'm trying to let the user draw a paddle that they can then use to hit a ball. However, I cannot seem to get the ball to bounce correctly because the x and y components of the ball's velocity are not lined up with the wall. How can I get around this?
I tried the advice given by Gareth Rees here, but apparently I don't know enough about vectors to be able to follow it. For example, I don't know what exactly you store in a vector - I know it's a value with direction, but do you store the 2 points it's between, the slope, the angle?
What I really need is given the angle of the wall and the x and y velocities as the ball hits, to find the new x and y velocities afterwards.
Gareth Rees got the formula correct, but I find the pictures and explanation here a little more clear. That is, the basic formula is:
Vnew = -2*(V dot N)*N + V
where
V = Incoming Velocity Vector
N = The Normal Vector of the wall
Since you're not familiar with vector notation, here's what you need to know for this formula: vectors are basically just x,y pairs, so V = (v.x, v.y) and N = (n.x, n.y). Planes are best described by their normal, that is, a vector of unit length that is perpendicular to the plane. Then a few formulas: b*V = (b*v.x, b*v.y); V dot N = v.x*n.x + v.y*n.y, that is, a scalar; and A + B = (a.x+b.x, a.y+b.y). Finally, to find a unit vector based on an arbitrary vector M, it's N = M / sqrt(M dot M).
If the surface is curved, use the normal at the point of contact.
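As a small sketch of that formula in C# (assuming System.Numerics.Vector2 and a wall drawn from wallStart to wallEnd; any x/y pair type works the same way):
// Reflect the incoming velocity v off the wall segment wallStart -> wallEnd
static Vector2 Bounce(Vector2 v, Vector2 wallStart, Vector2 wallEnd)
{
    // Wall normal: rotate the wall direction 90 degrees and make it unit length
    Vector2 wall = wallEnd - wallStart;
    Vector2 n = Vector2.Normalize(new Vector2(-wall.Y, wall.X));

    // Vnew = -2*(V dot N)*N + V
    return v - 2f * Vector2.Dot(v, n) * n;
}
It doesn't matter which of the two perpendicular directions n points in, because n appears twice in the formula.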

Basic render 3D perspective projection onto 2D screen with camera (without opengl)

Let's say I have a data structure like the following:
Camera {
    double x, y, z;
    /** ideally the camera angle is positioned to aim at the 0,0,0 point */
    double angleX, angleY, angleZ;
}

SomePointIn3DSpace {
    double x, y, z;
}

ScreenData {
    /** Convert from some point in 3d space to 2d space, end up with x, y */
    int x_screenPositionOfPt, y_screenPositionOfPt;
    double zFar = 100;
    int width = 640, height = 480;
}
...
Without screen clipping or much of anything else, how would I calculate the screen x,y position of some point given some 3d point in space. I want to project that 3d point onto the 2d screen.
Camera.x = 0
Camera.y = 10;
Camera.z = -10;
/** ideally, I want the camera to point at the ground at 3d space 0,0,0 */
Camera.angleX = ???;
Camera.angleY = ????
Camera.angleZ = ????;
SomePointIn3DSpace.x = 5;
SomePointIn3DSpace.y = 5;
SomePointIn3DSpace.z = 5;
ScreenData.x and y is the screen x position of the 3d point in space. How do I calculate those values?
I could possibly use the equations found here, but I don't understand how the screen width/height comes into play. Also, I don't understand in the wiki entry what the viewer's position is versus the camera position.
http://en.wikipedia.org/wiki/3D_projection
The 'way it's done' is to use homogeneous transformations and coordinates. You take a point in space and:
Position it relative to the camera using the model matrix.
Project it either orthographically or in perspective using the projection matrix.
Apply the viewport transformation to place it on the screen.
This gets pretty vague, but I'll try and cover the important bits and leave some of it to you. I assume you understand the basics of matrix math :).
Homogeneous Vectors, Points, Transformations
In 3D, a homogeneous point would be a column matrix of the form [x, y, z, 1]. The final component is 'w', a scaling factor, which for vectors is 0: this has the effect that you can't translate vectors, which is mathematically correct. We won't go there; we're talking points.
Homogeneous transformations are 4x4 matrices, used because they allow translation to be represented as a matrix multiplication rather than an addition, which is nice and quick for your video card. Also convenient because we can represent successive transformations by multiplying them together. We apply transformations to points by performing transformation * point.
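For example, translating a point by (tx, ty, tz) is just a matrix product:
| 1 0 0 tx |   | x |   | x + tx |
| 0 1 0 ty | * | y | = | y + ty |
| 0 0 1 tz |   | z |   | z + tz |
| 0 0 0 1  |   | 1 |   |   1    |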
There are 3 primary homogeneous transformations:
Translation,
Rotation, and
Scaling.
There are others, notably the 'look at' transformation, which are worth exploring. However, I just wanted to give a brief list and a few links. The successive application of moving, scaling and rotating to points is collectively the model transformation matrix, and places them in the scene relative to the camera. It's important to realise that what we're doing is akin to moving objects around the camera, not the other way around.
Orthographic and Perspective
To transform from world coordinates into screen coordinates, you would first use a projection matrix, which commonly comes in two flavors:
Orthographic, commonly used for 2D and CAD.
Perspective, good for games and 3D environments.
An orthographic projection matrix is constructed as follows:
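(One common form, in the OpenGL convention, with n and f the near and far clip distances and l, r, b, t the edges listed below:)
| 2/(r-l)     0          0        -(r+l)/(r-l) |
|    0     2/(t-b)       0        -(t+b)/(t-b) |
|    0        0      -2/(f-n)     -(f+n)/(f-n) |
|    0        0          0              1      |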
Where parameters include:
Top: The Y coordinate of the top edge of visible space.
Bottom: The Y coordinate of the bottom edge of the visible space.
Left: The X coordinate of the left edge of the visible space.
Right: The X coordinate of the right edge of the visible space.
I think that's pretty simple. What you establish is an area of space that is going to appear on the screen, which you can clip against. It's simple here, because the area of space visible is a rectangle. Clipping in perspective is more complicated, because the area which appears on screen, or the viewing volume, is a frustum.
If you're having a hard time with the Wikipedia article on perspective projection, here's the code to build a suitable matrix, courtesy of geeks3D:
void BuildPerspProjMat(float *m, float fov, float aspect,
                       float znear, float zfar)
{
    /* PI_OVER_360 = pi / 360: converts fov (given in degrees) to the half-angle in radians */
    float xymax = znear * tan(fov * PI_OVER_360);
    float ymin = -xymax;
    float xmin = -xymax;

    float width  = xymax - xmin;
    float height = xymax - ymin;

    float depth = zfar - znear;
    float q  = -(zfar + znear) / depth;
    float qn = -2 * (zfar * znear) / depth;

    float w = 2 * znear / width;
    w = w / aspect;
    float h = 2 * znear / height;

    /* column-major: m[0..3] is the first column, m[4..7] the second, and so on */
    m[0]  = w;  m[1]  = 0;  m[2]  = 0;   m[3]  = 0;
    m[4]  = 0;  m[5]  = h;  m[6]  = 0;   m[7]  = 0;
    m[8]  = 0;  m[9]  = 0;  m[10] = q;   m[11] = -1;
    m[12] = 0;  m[13] = 0;  m[14] = qn;  m[15] = 0;
}
Variables are:
fov: Field of view; 45 degrees (pi/4) is a good value. Note the code above expects degrees, because of the PI_OVER_360 factor.
aspect: Ratio of width to height (e.g. 640/480).
znear, zfar: used for clipping, I'll ignore these.
and the matrix generated is column major, indexed as follows in the above code:
0 4 8 12
1 5 9 13
2 6 10 14
3 7 11 15
Viewport Transformation, Screen Coordinates
Both of these transformations require another matrix to put things in screen coordinates, called the viewport transformation. That's described here; I won't cover it (it's dead simple).
Thus, for a point p, we would:
Perform model transformation matrix * p, resulting in pm.
Perform projection matrix * pm, resulting in pp.
Clip pp against the viewing volume.
Perform viewport transformation matrix * pp, resulting in ps: the point on screen.
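A minimal C# sketch of applying one of those matrices to a point (using the column-major float[16] layout from BuildPerspProjMat above), including the final divide by w:
// Multiply a column-major 4x4 matrix by the homogeneous point (x, y, z, 1),
// then divide by w. Visible points land in [-1, 1] on x and y.
static (float, float, float) TransformPoint(float[] m, float x, float y, float z)
{
    float px = m[0] * x + m[4] * y + m[8]  * z + m[12];
    float py = m[1] * x + m[5] * y + m[9]  * z + m[13];
    float pz = m[2] * x + m[6] * y + m[10] * z + m[14];
    float pw = m[3] * x + m[7] * y + m[11] * z + m[15];
    return (px / pw, py / pw, pz / pw);   // perspective divide
}
The viewport transformation then scales that [-1, 1] range to pixel coordinates.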
Summary
I hope that covers most of it. There are holes in the above and it's vague in places; post any questions below. This subject is usually worth a whole chapter in a textbook, and I've done my best to distill the process, hopefully to your advantage!
I linked to this above, but I strongly suggest you read it and download the binary. It's an excellent tool to further your understanding of these transformations and how points end up on the screen:
http://www.songho.ca/opengl/gl_transform.html
As far as actual work goes, you'll need to implement a 4x4 matrix class for homogeneous transformations, as well as a homogeneous point class you can multiply against it to apply transformations (remember, [x, y, z, 1]). You'll need to generate the transformations as described above and in the links. It's not all that difficult once you understand the procedure. Best of luck :).
@BerlinBrown, just as a general comment: you ought not to store your camera rotation as X, Y, Z angles, as this can lead to ambiguity.
For instance, x = 60 degrees is the same as x = -300 degrees. When using x, y and z, the number of ambiguous possibilities is very high.
Instead, try using two points in 3D space, x1,y1,z1 for camera location and x2,y2,z2 for camera "target". The angles can be backward computed to/from the location/target but in my opinion this is not recommended. Using a camera location/target allows you to construct a "LookAt" vector which is a unit vector in the direction of the camera (v'). From this you can also construct a LookAt matrix which is a 4x4 matrix used to project objects in 3D space to pixels in 2D space.
Please see this related question, where I discuss how to compute a vector R, which is in the plane orthogonal to the camera.
Given the vector from your camera to the target, v = x i + y j + z k,
normalise the vector: v' = v / |v| = (x i + y j + z k) / sqrt(x^2 + y^2 + z^2).
Let U be the global world up vector, u = (0, 0, 1).
Then we can compute R, the horizontal vector perpendicular to the camera's view direction (the camera's 'right' vector): R = v' ^ U,
where ^ is the cross product, given by
a ^ b = (a2*b3 - a3*b2) i + (a3*b1 - a1*b3) j + (a1*b2 - a2*b1) k
This gives you a horizontal vector at right angles to the view direction.
This could be of use for your question, because once you have the LookAt vector v' and the orthogonal vector R, you can start to project from the point in 3D space onto the camera's plane.
Basically all these 3D manipulation problems boil down to transforming a point in world space to local space, where the local x, y, z axes are oriented with the camera. Does that make sense? So if you have a point Q = (x, y, z) and you know R and v' (the camera axes), then you can project it to the 'screen' using simple vector manipulations. The angles involved can be found using the dot product operator on vectors.
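A hedged sketch of those steps in C# (System.Numerics.Vector3; cameraPos, targetPos and the world point q stand in for your own fields):
// Build the camera axes from a position/target pair, then express a world
// point q in those axes (camera space).
Vector3 vPrime = Vector3.Normalize(targetPos - cameraPos);      // LookAt direction v'
Vector3 up     = new Vector3(0, 0, 1);                          // global world up U
Vector3 r      = Vector3.Normalize(Vector3.Cross(vPrime, up));  // R = v' ^ U
Vector3 upCam  = Vector3.Cross(r, vPrime);                      // camera's own up axis

Vector3 local = q - cameraPos;                                  // q relative to the camera
float sideways = Vector3.Dot(local, r);
float vertical = Vector3.Dot(local, upCam);
float depth    = Vector3.Dot(local, vPrime);
// A simple perspective screen position is then (sideways / depth, vertical / depth),
// scaled by the focal length and the screen size.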
Following the Wikipedia article, first calculate "d":
http://upload.wikimedia.org/wikipedia/en/math/6/0/b/60b64ec331ba2493a2b93e8829e864b6.png
In order to do this, build up those matrices in your code. The mappings from your examples to their variables:
θ = Camera.angle*
a = SomePointIn3DSpace
c = Camera.x | y | z
Or, just do the equations separately without using matrices, your choice:
http://upload.wikimedia.org/wikipedia/en/math/1/c/8/1c89722619b756d05adb4ea38ee6f62b.png
Now we calculate "b", a 2D point:
http://upload.wikimedia.org/wikipedia/en/math/2/5/6/256a0e12b8e6cc7cd71fa9495c0c3668.png
In this case ex and ey are the viewer's position. I believe in most graphics systems half the screen size (0.5) is used to make (0, 0) the center of the screen by default, but you could use any value (play around). ez is where the field of view comes into play; that's the one thing you were missing. Choose a fov angle and calculate ez as:
ez = 1 / tan(fov / 2)
Finally, to get bx and by to actual pixels, you have to scale by a factor related to the screen size. For example, if b maps from (0, 0) to (1, 1) you could just scale x by 1920 and y by 1080 for a 1920 x 1080 display. That way any screen size will show the same thing. There are of course many other factors involved in an actual 3D graphics system but this is the basic version.
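Putting those pieces together, a rough C# sketch (the rotation order and the screen conventions here are assumptions; adjust them to match your scene):
// Rough sketch of the Wikipedia projection: translate, rotate into camera
// space (d), perspective-divide with ez, then scale to pixels.
static void ProjectToScreen(
    double ax, double ay, double az,              // SomePointIn3DSpace
    double cx, double cy, double cz,              // Camera position
    double angleX, double angleY, double angleZ,  // Camera angles, in radians
    double fov, int screenW, int screenH,         // fov is the vertical field of view, radians
    out double sx, out double sy)
{
    // 1. Translate so the camera sits at the origin
    double x = ax - cx, y = ay - cy, z = az - cz;

    // 2. Rotate by the negative camera angles (order assumed: X, then Y, then Z)
    double cX = Math.Cos(-angleX), sX = Math.Sin(-angleX);
    double cY = Math.Cos(-angleY), sY = Math.Sin(-angleY);
    double cZ = Math.Cos(-angleZ), sZ = Math.Sin(-angleZ);
    double y1 = cX * y - sX * z;
    double z1 = sX * y + cX * z;
    double x1 = cY * x + sY * z1;
    double dz = -sY * x + cY * z1;
    double dx = cZ * x1 - sZ * y1;
    double dy = sZ * x1 + cZ * y1;

    // 3. Perspective: ez from the field of view, ex/ey centre the image at 0.5
    double ez = 1.0 / Math.Tan(fov / 2.0);
    double bx = ez / dz * dx + 0.5;
    double by = ez / dz * dy + 0.5;

    // 4. Scale the roughly [0, 1] result to pixels
    sx = bx * screenW;
    sy = by * screenH;
}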
Converting points in 3D space into a 2D point on a screen is simply done by using a matrix. Use a matrix to calculate the screen position of your point; this saves you a lot of work.
When working with cameras you should consider using a look-at matrix, and multiplying that look-at matrix with your projection matrix.
Assuming the camera is at (0, 0, 0) and pointed straight ahead, the equations would be:
ScreenData.x = SomePointIn3DSpace.x / SomePointIn3DSpace.z * constant;
ScreenData.y = SomePointIn3DSpace.y / SomePointIn3DSpace.z * constant;
where "constant" is some positive value. Setting it to the screen width in pixels usually gives good results. If you set it higher then the scene will look more "zoomed-in", and vice-versa.
If you want the camera to be at a different position or angle, then you will need to move and rotate the scene so that the camera is at (0, 0, 0) and pointed straight ahead, and then you can use the equations above.
You are basically computing the point of intersection between a line that goes through the camera and the 3D point, and a vertical plane that is floating a little bit in front of the camera.
You might be interested in just seeing how GLUT does it behind the scenes. All of these methods have similar documentation that shows the math that goes into them.
The first three lectures from UCSD might be very helpful, and contain several illustrations of this topic, which as far as I can see is what you are really after.
Run it thru a ray tracer:
Ray Tracer in C# - Some of the objects he has will look familiar to you ;-)
And just for kicks a LINQ version.
I'm not sure what the greater purpose of your app is (you should tell us, it might spark better ideas), but while it is clear that projection and ray tracing are different problem sets, they have a ton of overlap.
If your app is just trying to draw the entire scene, this would be great.
Solving problem #1: Obscured points won't be projected.
Solution: Though I didn't see anything about opacity or transparency on the blog page, you could probably add these properties and code to process one ray that bounced off (as normal) and one that continued on (for the 'transparency').
Solving problem #2: Projecting a single pixel will require a costly full-image tracing of all pixels.
Obviously if you just want to draw the objects, use the ray tracer for what it's for! But if you want to look up thousands of pixels in the image, from random parts of random objects (why?), doing a full ray-trace for each request would be a huge performance dog.
Fortunately, with more tweaking of his code, you might be able to do one ray-tracing pass up front (with transparency), and cache the results until the objects change.
If you're not familiar with ray tracing, read the blog entry - I think it explains how things really work backwards from each 2D pixel, to the objects, then the lights, which determines the pixel value.
You can add code so that, as intersections with objects are made, you build lists indexed by the intersected points of the objects, with the item being the current 2D pixel being traced.
Then when you want to project a point, go to that object's list, find the nearest point to the one you want to project, and look up the 2D pixel you care about. The math would be far more minimal than the equations in your articles. Unfortunately, using for example a dictionary mapping your object+point structure to 2D pixels, I am not sure how to find the closest point on an object without running through the entire list of mapped points. That wouldn't be the slowest thing in the world, and you could probably figure it out; I just don't have the time to think about it. Anyone?
good luck!
"Also, I don't understand in the wiki entry what is the viewer's position vers the camera position" ... I'm 99% sure this is the same thing.
You want to transform your scene with a matrix similar to OpenGL's gluLookAt and then calculate the projection using a projection matrix similar to OpenGL's gluPerspective.
You could try to just calculate the matrices and do the multiplication in software.

Diagnosing backface culling issues

The setup: I'm using a cubemap projection to create a planet out of blocks. A cubemap projection is quite simple: take the vector from the center of a cube to any point on that cube, normalize it, then multiply that by the radius of a sphere and you have your coordinate's new position. Here's a quick illustration in 2D:
[link]
Now, as I said, I've created this so that it's made of blocks. So in practice, I divide my cube into equal square subdivisions (like a rubik's cube). I use a custom coordinate: (Face, X, Y, Shell). Face refers to which face on the cube the point is on. X and Y refer to its position on the face. Shell refers to its 'height'. In practice this translates into the radius of the sphere I project the point onto. If I haven't explained it well, hopefully an image will help:
[link]
--That's a planet generated with an entirely random heightmap, with backface culling turned off. Anyways, now that you have the idea of what I'm working with--
My problem is that I cannot get backface culling to work predictably. My current system works as follows:
Calculate the center of the block
Get the normal of the vertices on each triangle of the block by taking the cross product of two sides of the triangle
Get the vector from the center of the triangle (the average of the triangle's vertices) to the center of the block, normalize it.
Take the dot product of the normal of the triangle and the normal to the center of the block
If the dot product is >= 0, flip the first and last indices of the triangle
Here's that in code:
public bool CheckIndices(Quad q, Vector3 centerOfBlock)
{
    Vector3[] vertices = new Vector3[3];
    for (int v = 0; v < 3; v++)
        vertices[v] = q.Corners[indices[v]].Position;

    Vector3 center = (vertices[0] + vertices[1] + vertices[2]) / 3f;
    Vector3 normal = Vector3.Cross(vertices[1] - vertices[0], vertices[2] - vertices[0]);
    Vector3 position = center - centerOfBlock;
    position.Normalize();
    normal.Normalize();

    float dotProduct = Vector3.Dot(position, normal);
    if (dotProduct >= 0)
    {
        int swap = indices[0];
        indices[0] = indices[2];
        indices[2] = swap;
        return false;
    }
    return true;
}
I use a Quad class to hold triangles and some other data. Triangles store an int[3] for indices which correspond to the vertices stored in Quad.
However, when I use this method, at least half of the faces are drawn in the wrong direction. I have noticed two patterns in the problem:
Faces which point outward from the center of the planet are always correct
Faces which point inward toward the center of the planet are always incorrect
This led me to believe that my calculated center of the block was incorrect and in fact somewhere between the block and the center of the planet. However, changing my calculations for the center of the block was ineffective.
I have used two different methods to calculate the center of the block. The first was to find the projected position of a coordinate which had +.5 X, +.5 Y, and +.5 Shell (Z) from the block's position. Because I define block position using the bottom-left-back corner, this new coordinate would naturally be in the center of the block. The other method I use is to calculate the real position of each corner of the block and then average these vectors. This method seemed pretty foolproof to me, yet it did not succeed.
For this reason I am beginning to doubt the code I pasted above which determines if a triangle must be flipped. I do not remember all of the reasoning behind some of the logic, specifically behind the >= 0 statement. I just need another pair of eyes: is something wrong here?
The problem was that I was being too general in my cubemap projection when I got the position of an arbitrary point on a cube. Compare the GetCorrectedCubePosition method here to the same method here to see the improvements made. The clockwise index-order checking I described in my post is still of unknown effectiveness, as I won't be using it anymore. Using a correct projection means I can hard-define my vertices as clockwise in the generation methods themselves, instead of having to guess.

projection of point on line in 3D

I have a point (x, y, z)
and a line direction (x, y, z).
How do I get the projection of the point onto this line?
I tried this code
http://www.zshare.net/download/93560594d8f74429/
For example, when I use the intersection function in the code with line direction (1, 0, 0) and the point (2, 3, 3), I get a projection of (value in x, 0, 0), and this is a wrong value.
Any suggestions?
Best regards
You want to project the vector (x,y,z) on the line with direction (a,b,c).
If (a, b, c) is a unit vector then the result is just ((x,y,z) · (a,b,c)) (a,b,c) = (ax + by + cz)(a,b,c).
If it's not a unit vector, make it one by dividing it by its norm.
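For example, with the point (2, 3, 3) and the unit direction (1, 0, 0), the dot product is 2*1 + 3*0 + 3*0 = 2, so the projection onto the line through the origin is 2 * (1, 0, 0) = (2, 0, 0).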
EDIT: a little bit of theory:
Let E be your vector space of dimension N.
Let F be the line directed by the vector a. The hyperplane orthogonal to F is the set of vectors y in E with y · a = 0.
Now choose a vector x in E. x can be written as x = xF + xOrth,
where xF is the component of x in the direction of F, and xOrth is the component in the orthogonal hyperplane.
You want to find xF = ((x · a) / (a · a)) a (it's exactly the same formula as the one I wrote above).
You should have a close look at the Wikipedia article on orthogonal projections and try to find more material on the web.
You can generalise this to any F: if it's not a line anymore but a plane, then take the orthogonal complement of F and decompose x the same way, etc.
This topic is clearly old, and I think the original poster meant vector, not line. But for the purposes of Google:
A line, unlike a vector, does not (necessarily) have its origin at (0, 0, 0), so it cannot be described just by a direction; it also needs an origin. This is the zero point of the line; the line can extend beyond and before this point, but when you say you're zero meters along the line, this is where you mean.
So to get the projection of a point onto a line, you first need to convert the point into the local co-ordinate frame, which you do by subtracting the origin from the point (e.g. if a fence post is the 'line', you go from GPS co-ordinates to '5 metres to the north and a metre above the bottom of the fence post'). Now in this local co-ordinate frame the line is just a vector, so we can get the projection of the point using the normal dot product approach.
pointLocalFrame = point - origin
projection = dotProduct(lineDirection, pointLocalFrame)
NOTE: this assumes the line is infinite in length, if the projection is greater than the actual line length then there is no projection
NOTE: lineDirection must be normalised; i.e. its length must be 1
NB: dot product of two vectors (x1,y1,z1) and (x2,y2,z2) is x1*x2+y1*y2+z1*z2
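A small sketch of the whole thing in C# (assuming System.Numerics.Vector3); it returns the actual 3D point on the line closest to the input point, which is usually what 'the projection of the point onto the line' means:
// Project 'point' onto the infinite line through lineOrigin with direction lineDirection
static Vector3 ProjectPointOntoLine(Vector3 point, Vector3 lineOrigin, Vector3 lineDirection)
{
    Vector3 dir = Vector3.Normalize(lineDirection);   // direction must be unit length
    Vector3 local = point - lineOrigin;                // move into the line's local frame
    float t = Vector3.Dot(local, dir);                 // signed distance along the line
    return lineOrigin + dir * t;                       // back to world coordinates
}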
