Obtain real-world (X, Y, Z) coordinates from Xbox One Kinect - C#

I'm using an Xbox One Kinect to track an object. I'm currently able to obtain the object's Z coordinate (its distance from the Kinect in mm) using the depth image. The goal is to also obtain the real-world X and Y coordinates in mm. How would I go about that?
I've used the math from this answer, with the Xbox One Kinect's FOV numbers, but it doesn't come out right.

With only the depth stream you cannot do it. You need to identify the object's edges using the grayscale image. The same object has roughly the same gray value, so by looking for the change in value you can pinpoint the edges of the object. From those edges you can then take the x, y points of that object.
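For reference, here is a minimal sketch (not from this answer) of the FOV-based back-projection the question mentions, once you have the object's pixel position and its depth in mm. The 512x424 resolution and 70.6° x 60° field of view are assumed nominal values for the Kinect v2 depth camera; if you are using the Kinect SDK 2.0, CoordinateMapper.MapDepthPointToCameraSpace does this mapping with the sensor's calibrated intrinsics instead.

using System;

// Back-project a depth pixel to camera-space millimetres (pinhole model).
// Assumed constants: Kinect v2 depth stream, 512x424, ~70.6° x 60° FOV.
static void DepthPixelToWorld(int px, int py, ushort depthMm,
                              out double x, out double y, out double z)
{
    const int Width = 512, Height = 424;
    const double HFov = 70.6 * Math.PI / 180.0;
    const double VFov = 60.0 * Math.PI / 180.0;

    // Focal lengths in pixels, derived from the field of view.
    double fx = (Width / 2.0) / Math.Tan(HFov / 2.0);
    double fy = (Height / 2.0) / Math.Tan(VFov / 2.0);

    // Offset from the principal point (assumed to be the image centre).
    double dx = px - Width / 2.0;
    double dy = (Height / 2.0) - py;   // image Y grows downward, world Y grows upward

    x = dx * depthMm / fx;
    y = dy * depthMm / fy;
    z = depthMm;
}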

Related

Scale a plane in any of x or z axis using random function in time intervals

I am quite new to Unity3D, and I'm hoping for some help here!
I am working on a project where I have created a 3D Object > Plane. What I want is for the plane to extrude or scale along a specific axis (x or z), chosen randomly, using the InvokeRepeating function.
Reference image:
The problem I am facing is how to turn the plane on an axis and scale it in that direction.
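A minimal sketch of one way to approach this (the class name, interval and step size are just placeholders): attach a script to the plane and let InvokeRepeating pick X or Z at random each interval, growing the localScale along that axis.

using UnityEngine;

// Sketch: every "interval" seconds, pick the X or Z axis at random and
// grow the plane's local scale along that axis.
public class RandomAxisScaler : MonoBehaviour
{
    public float interval = 1f;   // seconds between scale steps
    public float step = 0.5f;     // how much to add to the chosen axis

    void Start()
    {
        InvokeRepeating("ScaleRandomAxis", interval, interval);
    }

    void ScaleRandomAxis()
    {
        Vector3 scale = transform.localScale;
        if (Random.value < 0.5f)
            scale.x += step;   // extrude along the local X axis
        else
            scale.z += step;   // extrude along the local Z axis
        transform.localScale = scale;
    }
}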

(Computer graphic) Getting mouse coordinate after translate and rotate the world matrix

Hi, is there any way to get the X, Y, Z of the mouse in Direct3D after I translate and rotate the world matrix?
The mouse doesn't have a Z coordinate because it's not a three-dimensional pointing device.
The best you can do is project the mouse's (x,y) coordinate on the screen through the viewing frustum to determine which portion of the viewing frustum correlates to the pixel position under the mouse cursor.
DirectX is completely unaware of the mouse and any other input devices; that is simply not what it cares about.
To get the x and y coordinates you call Win32 API functions (the details depend on the framework you are using).
To get a z coordinate, you must implement ray picking. There is no uniform way to do it, as this depends on how the picked objects are implemented. There are some tutorials on XNA picking that cover this.
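As an illustration of the ray-picking step, a rough sketch using XNA's Viewport.Unproject (assuming you have the current world, view and projection matrices available):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Turn a 2D mouse position into a world-space ray by unprojecting the
// near- and far-plane points under the cursor. The "Z of the mouse" then
// comes from intersecting this ray with your scene geometry.
static Ray BuildPickingRay(Viewport viewport, int mouseX, int mouseY,
                           Matrix world, Matrix view, Matrix projection)
{
    Vector3 near = viewport.Unproject(new Vector3(mouseX, mouseY, 0f),
                                      projection, view, world);
    Vector3 far = viewport.Unproject(new Vector3(mouseX, mouseY, 1f),
                                     projection, view, world);

    Vector3 direction = Vector3.Normalize(far - near);
    return new Ray(near, direction);
}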

Get the bounds of the plane visible at a specific z coordinate

Using OpenTK, I've created a window (800x600) with a vertical FOV of 90°.
I want to make a 2D game with a background image that fits on the whole screen.
What I want is the visible bounds of the plane at a variable z coordinate, returned as a RectangleF.
Currently my code is:
var y = (float)(Math.Tan(Math.PI / 4) * z);
return new RectangleF(aspectRatio * -y, -y, 2 * aspectRatio * y, 2 * y);
The rectangle calculated by this is always a little too small; the effect seems to decrease as z increases.
Hoping someone can find my mistake.
I want to make a 2D game with a background image that fits on the whole screen.
Then don't bother with perspective calculations. Just switch to an orthographic projection for drawing the background, disabling depth writes, then switch back to a perspective projection for the rest.
OpenGL is not a scene graph, it's a stateful drawing API. Make use of that fact.
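A rough sketch of that suggestion using OpenTK's legacy fixed-function wrappers (it assumes the background texture is already bound; the method name and the incoming perspective matrix are placeholders):

using OpenTK;
using OpenTK.Graphics.OpenGL;

// Draw the background with an orthographic projection and depth writes off,
// then restore the perspective projection for the rest of the scene.
static void DrawBackgroundThenScene(int width, int height, Matrix4 perspective)
{
    // 1) Background pass: orthographic projection, no depth writes.
    GL.MatrixMode(MatrixMode.Projection);
    GL.LoadIdentity();
    GL.Ortho(0, width, 0, height, -1, 1);
    GL.MatrixMode(MatrixMode.Modelview);
    GL.LoadIdentity();

    GL.DepthMask(false);
    GL.Begin(PrimitiveType.Quads);          // BeginMode.Quads on older OpenTK versions
    GL.TexCoord2(0f, 0f); GL.Vertex2(0f, 0f);
    GL.TexCoord2(1f, 0f); GL.Vertex2((float)width, 0f);
    GL.TexCoord2(1f, 1f); GL.Vertex2((float)width, (float)height);
    GL.TexCoord2(0f, 1f); GL.Vertex2(0f, (float)height);
    GL.End();
    GL.DepthMask(true);

    // 2) Scene pass: back to the perspective projection.
    GL.MatrixMode(MatrixMode.Projection);
    GL.LoadMatrix(ref perspective);
    GL.MatrixMode(MatrixMode.Modelview);
    GL.LoadIdentity();
    // ... draw the 3D scene here ...
}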
To make a 2D game using OpenGL, you should use an orthographic projection, like this tutorial shows.
Then it's simple to fill the screen with whatever image you want, because you aren't dealing with perspective.
However, if you insist on doing things the way you describe, you'd have to unproject (gluUnProject) the 4 corners of your screen using the current modelview matrix and then draw a quad in 3D space with those corners. Even with this method, the quad might sometimes fail to cover the entire screen due to floating-point errors.

How to convert a 3D acceleration into a rotation?

I have a program which uses the accelerometer in a Windows Phone 7 device, and I need to detect the rotation of the phone. I have X, Y and Z accelerations, and need to somehow figure out the orientation of the phone from them. How can this be accomplished? (Rotation values should be in degrees.)
Although I am working on the iPhone, it should basically be the same problem. Your hardware needs a gyroscope sensor to describe rotations, especially those around the axis parallel to gravity (let's call this z; x is right and y is up). If the device lies flat on the table and you rotate it around this z-axis, only tiny accelerations resulting from centrifugal forces are measured. So you can get some information about rotation, but you are limited in:
1) Users have to hold the device in a specific manner for you to detect the rotation properly.
2) Even in the best case of a 45-degree angle to the ground, it is very hard to get all 3 dimensions. You are better off if you can limit detection to 2 rotational directions only.
3) You are limited to either rotations or translations; combining detection of rotations with linear motions simultaneously is pretty hard.
Conclusion: for a racing game, force users to hold the device at a certain angle, limit z-rotation to the steering wheel, and use some other direction for e.g. power slides or whatever.
Axis conventions can be quite confusing. I'll stick with X for the horizontal axis (left and right), Y for the vertical axis (up and down), and Z for depth (far and near).
Using the accelerometer, you can only detect rotation about the X axis and Z axis, but not the Y axis.
Suppose your phone is placed flat and at rest: the force of gravity will make the Y acceleration read around -9.8, while the X and Z accelerations will be around 0.
Assume the phone starts flat in that position. When you rotate the phone about the Y axis (assuming no translation or change in position while rotating), there is no significant change in the X, Y and Z acceleration values. Therefore, you can't detect any rotation about the Y axis.
When you rotate about the X or Z axis (again assuming no change in position while rotating), all three acceleration values change, but the vector always keeps the property x^2 + y^2 + z^2 = 9.8^2.
You can use simple trigonometric formulas to determine the rotation about the X and Z axes.
As pointed out by Kay, you will still need a gyroscope, which outputs the angular velocity of rotation about each axis, to compute the rotation about the Y axis.
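For illustration, a minimal sketch (not from the thread) of that trigonometry, using the axis convention from this answer (Y vertical, gravity roughly -9.8 on Y when the phone lies flat). The exact signs depend on your device's axis directions:

using System;

// Tilt angles, in degrees, from a single accelerometer reading.
// Signs may need flipping for your particular device orientation.
static void TiltFromAcceleration(double ax, double ay, double az,
                                 out double aboutXDeg, out double aboutZDeg)
{
    // Rotation about the X axis shifts gravity between the Y and Z readings.
    aboutXDeg = Math.Atan2(az, Math.Sqrt(ax * ax + ay * ay)) * 180.0 / Math.PI;

    // Rotation about the Z axis shifts gravity between the Y and X readings.
    aboutZDeg = Math.Atan2(ax, Math.Sqrt(ay * ay + az * az)) * 180.0 / Math.PI;
}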
If you want to get the rotation angle of the phone held in your hands (i.e. rotated in one plane), let's say held facing your chest...
atan2(y accel., x accel.)
You'll get rotational values :) They're likely to be jittery, so you'll probably want to average the results over a sample period to smooth them out.
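A small sketch of that idea (the class name and window size are placeholders): compute atan2 per sample and average over a short sliding window to reduce the jitter.

using System;
using System.Collections.Generic;
using System.Linq;

// Compute atan2(yAccel, xAccel) in degrees per sample and smooth it with a
// naive sliding-window average (wrap-around near ±180° is ignored here).
class RotationSmoother
{
    private readonly Queue<double> window = new Queue<double>();
    private readonly int size;

    public RotationSmoother(int size = 10) { this.size = size; }

    public double AddSample(double xAccel, double yAccel)
    {
        window.Enqueue(Math.Atan2(yAccel, xAccel) * 180.0 / Math.PI);
        if (window.Count > size) window.Dequeue();
        return window.Average();
    }
}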

DirectX Z order

I have to draw a map with Managed DirectX. The map arrived in MapInfo format (lines, polylines, regions/polygons). Polygons are already triangulated (done with GLUtesselator).
The idea:
GPS Coordinates are converted to x,y points (Mercator Projection)
I use PositionColored VertexFormat
Center of the view is [x,y] (can modify by mouse move)
Camera is always positioned to [x,y,z] where z is the zoom (-100 by default, can modify by mouse wheel)
Camera target is: [x,y,0], camera up: [0,1,0]
The layers of the map are positioned by Z (+1.0, 0.99, 0.98, 0.97...etc)
I can already do:
Draw lines and polylines
Draw one layer of polygons
My problem is: when I want to draw all layers, I see only one of them. I think there is some problem with z ordering. What should I do to solve this? Modify the RenderState? The best would be if I could draw as in GDI (back first, front last).
Other question: how can I get the coordinate of the pixel under the mouse cursor? (In the GDI version of the map I could do it because I used my own viewport for rendering, but now DirectX does everything.)
Thanks!
If your map is purely 2D, make sure that Z buffering is turned off. Once it is, things will display in the order you draw them in.
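A small sketch of what that looks like in Managed DirectX (Microsoft.DirectX.Direct3D); IMapLayer is just a hypothetical stand-in for your own layer type:

using System.Collections.Generic;
using Microsoft.DirectX.Direct3D;

// With the depth buffer disabled, layers appear strictly in draw order,
// back to front, just like painting in GDI.
interface IMapLayer { void Draw(Device device); }   // hypothetical layer abstraction

static void DrawLayers(Device device, IEnumerable<IMapLayer> layersBackToFront)
{
    device.RenderState.ZBufferEnable = false;   // ignore Z; rely on draw order

    foreach (IMapLayer layer in layersBackToFront)
        layer.Draw(device);                      // background first, topmost last
}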
