I have a program which uses the accelerometer on a Windows Phone 7 device, and I need to detect what the rotation of the device is. I have X, Y, Z accelerations, and need to somehow figure out the orientation of the phone from that. How can this be accomplished? (Rotation values should be in degrees.)
Although I am working on iPhone, it should basically be the same problem. Your hardware needs a gyroscope sensor to describe rotations fully, especially rotations about the axis parallel to gravity (let's call this z; x points right and y up). If the device lies flat on the table and you rotate it around this z-axis, only tiny accelerations from centrifugal forces are measured. So you can get some information about rotation, but you are limited in these ways:
1) Users have to hold the device in a specific manner for you to detect the rotation properly.
2) Even in the best case of 45 degrees to the ground, it is very hard to get all 3 dimensions. You are better off if you can limit detection to 2 rotational directions only.
3) You are limited to either rotations or translations; detecting rotations and linear motions simultaneously is pretty hard.
Conclusion: For a racing game, force users to hold the device at a certain angle, use z-rotation for the steering wheel and some other direction for e.g. power slides or whatever.
Use of axes can be quite confusing. I will stick with X for the horizontal axis (left and right), Y for the vertical axis (up and down), and Z for the depth axis (far and near).
Using the accelerometer, you can only detect rotation about the X axis and Z axis, but not the Y axis.
Suppose your phone is placed flat at rest; the force of gravity will result in a Y acceleration of around -9.8, with the X and Z accelerations around 0.
Assume the phone remains flat in that position. When you rotate the phone about the Y axis (assuming no translation or change in position as you rotate), there is no significant change in the X, Y and Z acceleration values. Therefore, you can't detect any rotation about the Y axis.
When you rotate about the X or Z axis (again assuming no change in position while rotating), all three acceleration values change, but the vector keeps the characteristic x^2 + y^2 + z^2 = 9.8^2.
You can use simple trigonometric formulas to determine the rotation about the X and Z axes.
As pointed out by Kay, you will still need a gyroscope, which outputs the angular velocity about each axis, to compute the rotation about the Y axis.
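The trigonometric formulas mentioned above can be sketched like this (Python is used here purely for illustration; on WP7 you would do the same in C#, and the exact sign/axis conventions are assumptions that vary per device):

```python
import math

def tilt_degrees(ax, ay, az):
    """Estimate tilt from one accelerometer sample, assuming the only
    force acting on the device is gravity (no linear motion).

    Axis convention from the answer above: Y is vertical, so a phone
    lying flat reads roughly (0, -9.8, 0). Signs are an assumption.
    """
    rot_x = math.degrees(math.atan2(az, -ay))  # tilt about the X axis
    rot_z = math.degrees(math.atan2(ax, -ay))  # tilt about the Z axis
    return rot_x, rot_z

# Phone lying flat: no tilt on either axis.
print(tilt_degrees(0.0, -9.8, 0.0))  # (0.0, 0.0)
```

Standing the phone on its top edge (gravity along +Z) reports a 90-degree tilt about X, as expected; a rotation purely about Y leaves both outputs unchanged, which is exactly the blind spot described above.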
If you want to get the rotation angle of the phone held in your hands (i.e. rotated in one plane), say held facing your chest, use:
atan2(y accel., x accel.)
You'll get rotational values. It's likely to be jittery, so you'll probably want to average the results over a sample period to smooth them out.
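A minimal sketch of that atan2-plus-averaging idea (Python for brevity; the window size is an arbitrary choice, and the class name is illustrative):

```python
import math
from collections import deque

class SmoothedRotation:
    """Rotation angle from accelerometer X/Y via atan2, averaged over a
    sliding window to reduce jitter."""

    def __init__(self, window=10):
        self.samples = deque(maxlen=window)

    def update(self, ax, ay):
        # Angle of the gravity vector in the X/Y plane, in degrees.
        self.samples.append(math.degrees(math.atan2(ay, ax)))
        return sum(self.samples) / len(self.samples)

r = SmoothedRotation(window=3)
r.update(1.0, 0.0)
r.update(1.0, 0.0)
angle = r.update(0.0, 1.0)  # (0 + 0 + 90) / 3 = 30.0
```

Note that naively averaging angles misbehaves near the ±180-degree wraparound; averaging the raw (x, y) samples before the atan2 call avoids that.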
I am quite new to Unity3D, and I'm hoping for some help here!
I am working on a project where I have created a 3D Object > Plane. What I want is for the plane to extrude or scale along a specific axis (x or z), chosen randomly using the InvokeRepeating function.
Reference image (omitted).
The problem I am facing is how to turn the plane on an axis and scale it in that direction.
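The core of that logic, sketched language-agnostically in Python (in Unity this would be C# driven by InvokeRepeating, writing to transform.localScale and transform.position; everything else here is illustrative):

```python
import random

class Extruder:
    """Grow along a randomly chosen axis each tick."""

    def __init__(self):
        self.scale = [1.0, 1.0, 1.0]
        self.position = [0.0, 0.0, 0.0]

    def step(self, amount=0.5):
        axis = random.choice((0, 2))  # x or z, as in the question
        self.scale[axis] += amount
        # Shift by half the growth so the plane extends from one edge
        # instead of growing symmetrically about its center.
        self.position[axis] += amount / 2
        return axis
```

The position shift is the piece that makes the plane appear to "extrude" in one direction rather than swell in place, since Unity scales objects about their pivot.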
I have a Vector3 of how many blocks in a grid a piece occupies along each axis. For example, if one of these vectors was (1, 2, 1), it would be 1 block long on the x-axis, 2 blocks long on the y-axis, and 1 block long on the z-axis. I also have a Vector3 of angles that denote rotations about each axis. For example, if one of these vectors was (90, 180, 0), the piece would be rotated 90 degrees around the x-axis, 180 degrees around the y-axis, and 0 degrees around the z-axis. What I can't figure out is how to rotate the dimensions of a piece by its vector of rotation angles so I know what points in space it's occupying.
public class Block
{
    private Vector3 localOrientation;
    private Vector3 dimensions;

    public Vector3 GetRotatedDimensions()
    {
        // your implementation here
    }
}
If I understand correctly, there is something fundamentally wrong with your question: there can be no "rotated dimensions". Let's use a rectangle to demonstrate this. (As it turns out, I didn't understand correctly.)
Suppose there's an initial rectangle of size x by y (image omitted), and you rotate it; what you get is a tilted rectangle whose axis-aligned bounding box is x' by y' (image omitted).
Using a single Vector2, you can't differentiate a "rotated x*y rectangle" from an "initial x'*y' rectangle". To sufficiently describe the rectangle, you need to keep both the size AND the rotation in your block-describing variable.
Are x' and y' what you wanted to know? I doubted it. Oh, you do? Great!
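In 2D, those bounding dimensions have a closed form: x' = |x cos θ| + |y sin θ| and y' = |x sin θ| + |y cos θ|. A quick check (Python, purely as an illustration):

```python
import math

def rotated_bounds_2d(w, h, theta):
    """Axis-aligned bounding box of a w x h rectangle rotated by
    theta radians about its center."""
    c, s = abs(math.cos(theta)), abs(math.sin(theta))
    return w * c + h * s, w * s + h * c

print(rotated_bounds_2d(4, 2, 0))            # (4.0, 2.0)
print(rotated_bounds_2d(4, 2, math.pi / 2))  # ~ (2, 4): width and height swap
```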
In 3 dimensions, I would define what you're looking for as
The minimum dimensions of a rectangular box that
1. has its faces parallel to the XY, XZ and YZ planes and
2. contains another rectangular box of known dimensions and orientation.
There are possibly more elegant solutions, but I'd brute force it like this:
1. Make 8 Vector3 objects, one for each vertex of your block.
2. Rotate all of them around the x-axis.
3. Rotate them (the results of step 2) around the y-axis.
4. Rotate them (the results of step 3) around the z-axis.
5. Find the min and max values of the x, y and z coordinates among all 8 points.
6. Your new dimensions are (x_max - x_min), (y_max - y_min), (z_max - z_min).
I'm not 100% sure about this though, so make sure you verify the results!
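That brute-force recipe can be sketched like this (Python for brevity; note the X-then-Y-then-Z rotation order is an assumption and must match whatever order your engine applies rotations in):

```python
import math
from itertools import product

def rot_x(p, a):
    x, y, z = p
    c, s = math.cos(a), math.sin(a)
    return (x, y * c - z * s, y * s + z * c)

def rot_y(p, a):
    x, y, z = p
    c, s = math.cos(a), math.sin(a)
    return (x * c + z * s, y, -x * s + z * c)

def rot_z(p, a):
    x, y, z = p
    c, s = math.cos(a), math.sin(a)
    return (x * c - y * s, x * s + y * c, z)

def rotated_dimensions(dims, angles_deg):
    """Axis-aligned extents of a box with the given dimensions after
    rotating it by (ax, ay, az) degrees about X, then Y, then Z."""
    ax, ay, az = (math.radians(a) for a in angles_deg)
    dx, dy, dz = dims
    # Step 1: the 8 vertices of the box, centered at the origin.
    verts = [(sx * dx / 2, sy * dy / 2, sz * dz / 2)
             for sx, sy, sz in product((-1, 1), repeat=3)]
    # Steps 2-4: rotate every vertex about X, then Y, then Z.
    verts = [rot_z(rot_y(rot_x(v, ax), ay), az) for v in verts]
    # Steps 5-6: min/max along each axis give the new dimensions.
    return tuple(max(c) - min(c) for c in zip(*verts))

# The (1, 2, 1) block from the question, rotated (90, 180, 0),
# ends up occupying roughly 1 x 1 x 2 (up to floating-point noise).
print(rotated_dimensions((1, 2, 1), (90, 180, 0)))
```

Porting this back to the C# Block class above means building the 8 corner Vector3s from dimensions and applying localOrientation the same way.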
The coordinate system in a Viewport3D in WPF is fixed at the center of the screen, with the positive X-axis pointing right, the positive Y-axis pointing up, and the Z-axis pointing out of the screen. All transform calculations are made with respect to this coordinate system. My question is: how can I change the position and orientation of this coordinate system dynamically in my code?
The requirement is: I have two wheels rolling on the ground, but when the wheels take a 90-degree turn, the axes get interchanged. If I could shift and reorient this coordinate system, I could tackle this problem.
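The axis interchange described here is exactly what a 90-degree rotation about the up axis does, which suggests composing a rotation into the wheel's transform rather than moving the world frame. A quick numeric check of that mapping (Python as an illustration; in WPF this would be a RotateTransform3D in the model's Transform group, and the sign convention below is an assumption):

```python
import math

def yaw(p, deg):
    """Rotate point p about the vertical Y axis by deg degrees."""
    x, y, z = p
    c, s = math.cos(math.radians(deg)), math.sin(math.radians(deg))
    return (x * c + z * s, y, -x * s + z * c)

# After a 90-degree turn, the local X axis maps onto -Z and Z onto X:
print([round(v, 6) for v in yaw((1, 0, 0), 90)])  # ~ [0, 0, -1]
```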
I'm using an Xbox One Kinect to track an object. I'm currently able to obtain the object's Z coordinate (distance from the Kinect in mm) using the depth image. The goal is to also obtain the real-world X and Y coordinates in mm. How would I go about that?
I've used the math from this answer with the Xbox One Kinect's FOV numbers, but it doesn't come out right.
With only the depth stream you cannot do it. You need to identify the edges using the grayscale image: the same object has the same gray value, so by detecting where the color changes you can pinpoint the edges of the object. That way you can find its outline and then take the x, y points of that object.
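For reference, the FOV-based math the question alludes to is the standard pinhole back-projection (Python sketch; the resolution and FOV numbers below are the commonly quoted Xbox One Kinect depth-camera values, used here as assumptions, and the Kinect SDK's CoordinateMapper does this mapping with the true calibrated intrinsics):

```python
import math

def depth_to_world(px, py, depth_mm, width=512, height=424,
                   h_fov_deg=70.6, v_fov_deg=60.0):
    """Back-project depth pixel (px, py) with depth in mm to
    real-world X/Y in mm using a pinhole camera model."""
    # Focal lengths in pixels, derived from the fields of view.
    fx = (width / 2) / math.tan(math.radians(h_fov_deg) / 2)
    fy = (height / 2) / math.tan(math.radians(v_fov_deg) / 2)
    x = (px - width / 2) * depth_mm / fx
    y = (height / 2 - py) * depth_mm / fy  # flip so +Y is up
    return x, y

# A point at the exact image center lies on the optical axis:
print(depth_to_world(256, 212, 1000))  # (0.0, 0.0)
```

If results with formulas like this still "don't come out right", the usual culprits are using the color camera's FOV instead of the depth camera's, or mixing up the principal point.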
I'm making a small game in XNA. I have a camera 20 units up in the air on the y axis. Below it I have a grid of tiles that are 100x100. Right now I'm trying to have a 3D object move with the mouse along the X and Z axes of the grid. I'm using Viewport.Unproject to convert the 2D screen coordinates to 3D ones, but whatever I try doesn't seem to be quite right. At the moment I have this:
Vector3 V1 = graphicsDevice.Viewport.Unproject(new Vector3(mouse.X, mouse.Y, 0f), camera.Projection, camera.View, camera.World);
If I use this, it moves, but only by a tiny amount. I've tried replacing the Z value with a 1, but then it moves a drastic amount (I understand why, I'm just not really sure how to fix it).
I've tried various other methods, such as unprojecting two vectors, one with Z = 0 and one with Z = 1, then subtracting and normalizing them, but that wasn't it either.
The closest I got was multiplying the result by the zoom amount, but it wasn't perfect: it was slightly offset and would go crazy whenever I scrolled the screen, so I figured that was the wrong approach too.
Any help would be greatly appreciated, thanks.
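The two-unproject attempt above is actually the usual approach; the missing piece is intersecting the resulting ray with the ground plane instead of using the unprojected point directly. The ray-plane step looks like this (Python sketch of the math only; in XNA, near and far would be the Viewport.Unproject results for z = 0 and z = 1):

```python
def pick_ground(near, far, ground_y=0.0):
    """Intersect the ray through two unprojected points with the
    plane y = ground_y. Returns the 3D hit point, or None if the
    ray is parallel to the plane or points away from it."""
    dx, dy, dz = (f - n for f, n in zip(far, near))
    if dy == 0:
        return None                      # ray parallel to the ground
    t = (ground_y - near[1]) / dy
    if t < 0:
        return None                      # plane is behind the camera
    return (near[0] + t * dx, near[1] + t * dy, near[2] + t * dz)

# Camera 20 units up, ray pointing straight down at tile (5, 5):
print(pick_ground((5, 20, 5), (5, 19, 5)))  # (5.0, 0.0, 5.0)
```

Because the hit point is computed on the plane itself, it stays correct regardless of zoom or scrolling, which is why the zoom-multiplier workaround kept drifting.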