I want to use a smart Rubik's Cube (GoCube) to control a 3D object in Unity; the cube has an IMU, and a Unity plugin is available. Currently I'm trying to implement the basic logic to orient the 3D object the way the cube is oriented using the IMU data, meaning the 3D object should follow the cube's rotations. For this, I tried to copy the relevant elements from the original plugin.
Unfortunately, I cannot get the orientation quaternions to behave the right way: rotating the cube around the X and Z axes comes out mirrored, as I demonstrate in the GIF. Rotation around the Y axis works correctly.
Demo of quaternion rotations
Without the smart cube there's obviously no way to provide a working example, but here is the part of the code I'm using to orient the 3D object:
public GameObject objectToBeSynchronised;
public Quaternion fixedOffset = Quaternion.Euler(90, 90, 0);

void Update()
{
    objectToBeSynchronised.transform.rotation = GetCurrentOrientation(); // live-rotate object
}
/// <summary>
/// Function to get current cube orientation as quaternion
/// </summary>
/// <returns>cube orientation as quaternion</returns>
public Quaternion GetCurrentOrientation()
{
    // convert from type Particula.Quat to Quaternion and multiply by the fixed offset from Particula
    Quat q = GoCubeProvider.GetProvider().GetConnectedGoCube().orientation;
    return ConvertQuatToQuaternion(q) * fixedOffset;
}
/// <summary>
/// Function to convert quaternion from Particula.Quat to Quaternion type
/// </summary>
/// <param name="quat">quaternion type Particula.Quat from cube orientation</param>
/// <returns>converted Quaternion</returns>
Quaternion ConvertQuatToQuaternion(Quat quat)
{
    return new Quaternion(quat.x, quat.y, quat.z, quat.w);
}
The fixedOffset and ConvertQuatToQuaternion() are the parts I imitated from the original plugin; the relevant file is CubeViewModel.cs. fixedOffset makes the 3D object stand upright instead of lying on its side. Maybe I'm missing a part of the original script that would make this work - the file CubeView.cs could also be interesting.
After reading answers such as here, here and here, I played around with the ConvertQuatToQuaternion function - I tried switching the values to e.g. (w, x, y, z), adding minus signs as in (-x, y, -z, w), and a lot of other combinations. None made it right. I also tried Quaternion.Inverse on the converted quaternion - no success.
I don't have much knowledge of, let alone experience with, quaternions and am a bit lost. I was expecting negating x and z to work, since those are the axes that are flipped. However, there seems to be more to this; maybe, or probably, I'm missing a step from the original plugin.
For anyone planning on developing with the GoCube and their API: I found a solution! I changed nothing about my code sample except the fixedOffset. As suggested in the Unity docs, I declared it as public static Quaternion fixedOffset = Quaternion.Euler(90, 90, 0) instead of just public, and this does the job! For some reason, the Inspector was messing with things. I don't know why the original plugin didn't declare it as static. I hope this helps anyone struggling with the same GoCube-Unity implementation.
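For reference, the only change is the field declaration. A likely explanation (my assumption): Unity serializes public fields and shows them in the Inspector, so a stale value saved in the scene overrides the initializer in code, while static fields are not serialized at all.
// Before: serialized by the Inspector, which can override the code value
public Quaternion fixedOffset = Quaternion.Euler(90, 90, 0);

// After: static fields are not serialized, so the initializer always wins
public static Quaternion fixedOffset = Quaternion.Euler(90, 90, 0);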
Try converting the quaternion into Unity's left handed coordinate system.
The correct conversion is as follows:
Original quaternion:
q = w + xi + yj + zk
Transformed into the opposite-handed coordinate system:
q = w - xi - yj - zk
Since Unity constructs quaternions in this manner:
Quaternion q = new Quaternion(x, y, z, w)
the transformed Quaternion can be constructed as follows:
Quaternion qTrans = new Quaternion(-q.x, -q.y, -q.z, q.w)
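Applied to the conversion function from the question, a minimal sketch (assuming Particula.Quat exposes the x, y, z, w fields used above) would be:
// Negate the vector part (x, y, z) to flip handedness; w stays unchanged.
Quaternion ConvertQuatToQuaternion(Quat quat)
{
    return new Quaternion(-quat.x, -quat.y, -quat.z, quat.w);
}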
Related
Basically I am looking for a simple way to use the rigidbody/physics engine to have my ship look at another object (the enemy). I figured that getting the direction between my transform and the enemy transform, converting that to a rotation, and then using the built-in MoveRotation might work, but it is causing this weird effect where it just kind of tilts the ship. I posted the bit of code as well as images from before and after the attempt to rotate the ship (the sphere is the enemy).
private void FixedUpdate()
{
    // There is a user on the ship's control panel.
    if (user != null)
    {
        var direction = (enemyOfFocus.transform.position - ship.transform.position);
        var rotation = Quaternion.Euler(direction);
        ship.GetComponent<Rigidbody>().MoveRotation(rotation);
    }
}
Before.
After.
Well, Quaternion.Euler constructs a rotation about the given angles, which are provided as a Vector3 for convenience.
Your direction is rather a vector pointing from ship towards enemyOfFocus, and it has a magnitude, so the x, y, z values also depend on the distance between your objects. These are not Euler angles!
What you rather want is Quaternion.LookRotation (the example in the docs is basically exactly your use case ;) )
var direction = enemyOfFocus.transform.position - ship.transform.position;
var rotation = Quaternion.LookRotation(direction);
ship.GetComponent<Rigidbody>().MoveRotation(rotation);
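As a side note, the GetComponent call can be hoisted out of FixedUpdate and cached; a minimal sketch, assuming the same fields as in the question:
private Rigidbody shipBody; // fetched once instead of every physics step

private void Awake()
{
    shipBody = ship.GetComponent<Rigidbody>();
}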
I cannot get my head around quaternions, and have tried many combinations of operators on this (after reading and viewing tutorials). No success - which just shows my lack of understanding. I even tried it with angles and atan2() - but all that really showed me was what gimbal lock is. The problem I am trying to solve is:
I have a component (say A, a cube) with some rotation in the world frame of reference (doesn't actually matter what that rotation is, could be orientated any direction).
Then I have another component (say B, another cube) elsewhere in the world space. Doesn't matter what that component's rotation is, either.
What I want to get to in the end are the (Euler) angles of B relative to A, BUT expressed in A's frame of reference, not world space. The reason is that I want to add forces to A based on B's direction from A, which I can then do with A.rigidbody.AddRelativeForce(x, y, z);.
Stuck.
With thanks for the comments, I think I have it.
For reference: given a vector and a rotation expressed in the same (world-frame) coordinates, a new vector representing the original, but viewed from the local frame defined by that world rotation, can be calculated as:
static Vector3 createVectorForLocalRotationFrame(Vector3 worldVector, Quaternion worldRotation, bool normalise)
{
    // Undo the world rotation to express the vector in that rotation's local frame
    Vector3 localVector = Quaternion.Inverse(worldRotation) * worldVector;
    if (normalise)
        localVector.Normalize();
    return localVector;
}
The normalise option is just something I needed. Don't ask me how it works, but it appears to be right.
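To connect this back to the original goal, the local-frame vector can feed straight into AddRelativeForce. A rough sketch with assumed names (A, B and thrust are placeholders):
// A pushes itself towards B, with the force expressed in A's local frame.
Vector3 worldDirection = B.transform.position - A.transform.position;
Vector3 localDirection = createVectorForLocalRotationFrame(worldDirection, A.transform.rotation, true);
A.GetComponent<Rigidbody>().AddRelativeForce(localDirection * thrust);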
In the Unity documentation, under Transform.rotation, it is written:
Do not attempt to edit/modify rotation.
But a few lines later, in the example, the assignment transform.rotation = ... is used.
And setting transform.rotation directly seems to be everyday practice.
But can it be recommended? And what about other properties like localRotation - is the setter of the rotation property implemented so that localRotation is changed too?
I feel like the wording on this in the docs used to be more clear, but this is what it implies:
It is highly discouraged to edit the transform.rotation of an object directly, that is to say, doing something like transform.rotation = new Quaternion(0, 100, 15, 1);, unless you really know the ins and outs of working with quaternions, which are a great deal harder to work with than Euler angles, which are more user-friendly and easier to understand.
What you should be using instead (and this is also reflected in the code samples of the docs) are the methods made available by Unity to alter the rotation. These methods will deal with the complexity that comes with changing Quaternion values for you.
If we take a closer look at the code samples Unity provides:
Quaternion target = Quaternion.Euler(tiltAroundX, 0, tiltAroundZ);
transform.rotation = Quaternion.Slerp(transform.rotation, target, Time.deltaTime * smooth);
The two key parts here are target = Quaternion.Euler(), which takes three Euler angles (degrees on a 360-degree scale) and converts them to a quaternion for you, and rotation = Quaternion.Slerp(), which takes the current rotation and a target rotation, both as quaternions, and interpolates between the two.
Notice that neither of these two functions alters the transform.rotation "directly" from your side, but both pass through Unity's internal logic for converting to proper quaternions.
The same goes for methods like transform.Rotate.
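For example, a simple relative rotation per frame could look like this (a minimal sketch):
// Spins the object 90 degrees per second around its local Y axis;
// Unity handles the underlying quaternion math.
transform.Rotate(0f, 90f * Time.deltaTime, 0f);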
transform.localRotation is basically the same as transform.rotation; the only difference is that the former is the rotation relative to the parent, and the latter the rotation relative to the world.
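A small illustration of the difference, assuming the object has a parent that is rotated 90 degrees around Y:
transform.localRotation = Quaternion.identity; // aligned with the parent
Debug.Log(transform.rotation.eulerAngles);     // world rotation still reports the parent's 90 on Y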
If you want to edit the rotation of an object directly it is easiest to edit the EulerAngles of the object. An example being:
transform.localEulerAngles = new Vector3(10, 150, 0);
This will rotate your object 10 degrees around the X axis and 150 degrees around the Y axis, relative to its parent. Using transform.eulerAngles would rotate it relative to the world.
Note that, despite this being the easiest way to go about it, Unity does encourage using the Quaternion class and its functions to apply rotations, as using Euler angles can cause weird effects.
There is also this doc talking about rotation and orientation in Unity which also states (emphasis mine):
Quaternions can be used to represent the orientation or rotation of a GameObject. This representation internally consists of four numbers (referenced in Unity as x, y, z & w) however these numbers don’t represent angles or axes and you never normally need to access them directly. Unless you are particularly interested in delving into the mathematics of Quaternions, you only really need to know that a Quaternion represents a rotation in 3D space and you never normally need to know or modify the x, y & z properties.
The reason for Unity using quaternions is, among others, that quaternions do not suffer from gimbal lock, which Euler angles do. I quote from the article:
Unity stores all GameObject rotations internally as Quaternions, because the benefits outweigh the limitations.
The components of transform.rotation are not angles and cannot be handled as angles.
As robertbu mentions in this article:
Many rotations (and likely the one you were attempting) can be accomplished through other (simpler) methods, including Transform.Rotate() and directly accessing Transform.eulerAngles...
Transform.Rotate() does a relative rotation, and by default, a local rotation...
What they mean by not editing/modifying rotation is that rotation is backed by a built-in Unity type (Quaternion) that handles rotations for objects.
So feel free to set the rotation of an object like so:
transform.rotation = someOtherObject.transform.rotation;
But modifying the internals of the built-in rotation type isn't recommended.
I really am asking this as a last resort; I haven't been able to solve this for two days now. So if someone knows a thing or two about 3D, matrices and animation, I would really appreciate your input.
I have downloaded this project and implemented it into my own project: http://xbox.create.msdn.com/en-US/education/catalog/sample/skinned_model.
The character in my game moves his hands as he casts a spell. I have successfully made this animation and imported it into the project. But I need to spawn particles inside the palms of his hands, which move according to an animation. All I need is the 3D position of the palms of his hands after the animation has been applied.
Picture of hands during the animation:
http://s18.postimg.org/qkaipufa1/hands.png
If you take a look at the skinned model sample project, class AnimationPlayer.cs, you will notice that it processes the matrices in three passes:
public void Update(TimeSpan time, bool relativeToCurrentTime,
                   Matrix rootTransform)
{
    UpdateBoneTransforms(time, relativeToCurrentTime);
    UpdateWorldTransforms(rootTransform);
    UpdateSkinTransforms();
}
And allows us to access them after each of the steps:
/// <summary>
/// Gets the current bone transform matrices, relative to their parent bones.
/// </summary>
public Matrix[] GetBoneTransforms()
{
    return boneTransforms;
}

/// <summary>
/// Gets the current bone transform matrices, in absolute format.
/// </summary>
public Matrix[] GetWorldTransforms()
{
    return worldTransforms;
}

/// <summary>
/// Gets the current bone transform matrices,
/// relative to the skinning bind pose.
/// </summary>
public Matrix[] GetSkinTransforms()
{
    return skinTransforms;
}
I should also mention that I know the index of the bone in the palm and the index of all its parents:
10 - 11's parent, Root bone
11 - 12's parent
12 - 13's parent
13 - 14's parent
14 - 15's parent
15 - 16's parent
16 - The bone in the palm
As far as I understand this project, all of the GetXXXXXXXX methods I listed above return a Matrix[] array ordered by bone index. So to get the transform of bone 10, I believe the code will look like:
Matrix[] M = animtionplayer.GetSkinTransforms();
Matrix transform = M[10];
OK, now for the parts I don't understand.
I don't know which of the 3 GetXXXXXXXXX functions I need to use to get the palm's position.
I think the way the shaders calculate the position of the bones is by multiplying them by each of their parent bones. So:
Matrix[] M = animtionplayer.GetBoneTransforms();
Matrix transform = M[10];
transform = transform * M[11];
transform = transform * M[12];
transform = transform * M[13];
transform = transform * M[14];
transform = transform * M[15];
transform = transform * M[16];
//maybe apply the world position of the model?
transform = transform * MyWorld;
And then maybe to get a vector3 position.
Vector3 HandPosition = transform.Up;
Well, when I try the solution above I get mixed results. With certain bones it moves correctly for the middle section of the animation, but honestly nothing good. Can someone explain what's going on here? How do I get the position of the bone in the palm? I'm really in the dark here. I only learnt what a matrix was 2 months ago, and animation with bones only this week.
Alright, this bug was quite frustrating, so I took a break from coding for about a week. I started working on it again 2 days ago, trying many random things, looking at the values of the matrices for patterns, and I finally got it.
Here is an image where the hands are animated and all the bones are highlighted perfectly with big white spheres. And it follows the animation perfectly too!
http://s27.postimg.org/f48i7ae4x/2014_07_30_18_H14_M20_S.jpg
The code:
Matrix[] BoneTransforms = animtionPlayer.GetWorldTransforms();
for (int i = 0; i < BoneTransforms.Length; i++)
{
    // M41, M42, M43 are the translation (X, Y, Z) components of the bone's world transform
    Vector3 v3 = new Vector3(BoneTransforms[i].M41, BoneTransforms[i].M42, BoneTransforms[i].M43);
    v3 = Vector3.Transform(v3, GetWorld(Position, Rotation, Scale)); // GetWorld is the model's world matrix: position, rotation and scale
    Game1.DrawMarker(v3); // v3 is the position of the current bone
}
I just made that code by watching all of the values of the specific matrix. I learnt that M41, M42, M43 are the X, Y, Z coords of the specific bone.
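For what it's worth, XNA's Matrix also exposes exactly those three components through its Translation property, so the extraction can be written more compactly (same values, just named):
// Equivalent to reading M41/M42/M43 by hand:
Vector3 bonePos = BoneTransforms[i].Translation;
Vector3 worldPos = Vector3.Transform(bonePos, GetWorld(Position, Rotation, Scale));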
Reopen your mesh designer, put an empty in his hands about where the particle effect should be, parent the empty to the palm bones, and keep track of which direction the empty faces in local coordinates so you can orient the particles correctly; then rebake the animation and reimport it. I will admit I'm not familiar with the mesh skinner you're using, but I would imagine it supports child objects of bones, like empties, as they're frequently employed for such purposes. You're overthinking this. If you can't do this with the Xbox kit, I don't know.
A "transform matrix" isn't necessarily the location of the bone - so I'm confused about what these functions return. They probably just return the matrix it uses to calculate the position of the vertices which are weighted to those bones. There isn't a function to return the "world position" of a palm bone and simply track your animation to the world position of the empty or palm bone? I'm confused what you need to use the transform matrices for. If the library you're using is good, there will almost certainly be a way to get a bone's world coordinate in which case adding a "tracking bone" just outside the palm will let you track animations or objects to that bone or even directly parent them.
I have code I wrote for displaying a mirror: a plane textured each frame from a RenderTarget2D.
This all works perfectly fine, but I still think there is something wrong with the way the reflection goes (as if the mirror isn't looking exactly where it's supposed to be looking).
There's a screenshot of the mirror that doesn't really look bad; the distortion mainly occurs when the player gets close to the mirror.
Here is my code for creating the mirror texture; notice that the mirror is rotated by 15 degrees on the X axis.
RenderTarget2D rt;
...
rt = new RenderTarget2D(device, (int)(graphics.PreferredBackBufferWidth * 1.5), (int)(graphics.PreferredBackBufferHeight * 1.5));
...
device.SetRenderTarget(rt);
device.Clear(Color.Black);
Vector3 camerafinalPosition = camera.position;
if (camera.isCrouched) camerafinalPosition.Y -= (camera.characterOffset.Y * 6 / 20);
Vector3 mirrorPos = new Vector3((room.boundingBoxes[8].Min.X + room.boundingBoxes[8].Max.X) / 2, (room.boundingBoxes[8].Min.Y + room.boundingBoxes[8].Max.Y) / 2, (room.boundingBoxes[8].Min.Z + room.boundingBoxes[8].Max.Z) / 2);
Vector3 cameraFinalTarget = new Vector3((2 * mirrorPos.X) - camera.position.X, (2 * mirrorPos.Y) - camerafinalPosition.Y, camera.position.Z);
cameraFinalTarget = Vector3.Transform(cameraFinalTarget - mirrorPos, Matrix.CreateRotationX(MathHelper.ToRadians(-15))) + mirrorPos;
Matrix mirrorLookAt = Matrix.CreateLookAt(mirrorPos, cameraFinalTarget, Vector3.Up);
room.DrawRoom(mirrorLookAt, camera.projection, camera.position, camera.characterOffset, camera.isCrouched);
device.SetRenderTarget(null);
And then the mirror is being drawn using the rt texture.
I suppose something isn't completely right with the reflection math or the way I create the LookAt matrix. Thanks for the help.
I didn't use XNA, but I did some Managed C# DX a long time ago, so I don't remember too much, but are you sure mirrorLookAt should point at cameraFinalTarget? Matrix.CreateLookAt creates a matrix out of from, to, and up vectors - 'to' in your example is the point where the mirror aims. You need to calculate a vector from the camera position to the mirror position and then reflect it, and I don't see that in your code.
Unless your room.DrawRoom method calculates another mirrorLookAt matrix, I'm pretty sure your mirror target vector is the problem.
edit: Your reflection vector would be
Vector3 vectorToMirror = new Vector3(mirrorPos.X - camera.position.X, mirrorPos.Y - camera.position.Y, mirrorPos.Z - camera.position.Z);
Vector3 mirrorReflectionVector = vectorToMirror - 2f * Vector3.Dot(vectorToMirror, mirrorNormal) * mirrorNormal;
Also, I don't remember whether mirrorReflectionVector should be inverted (whether it should point towards the mirror or away from it). Just check both ways and you'll see. Then you create your mirrorLookAt from it; since CreateLookAt expects a target point rather than a direction, add the vector to the mirror position:
Matrix mirrorLookAt = Matrix.CreateLookAt(mirrorPos, mirrorPos + mirrorReflectionVector, Vector3.Up);
Though I don't know where the normal of your mirror is. Also, I've noticed one line I can't really understand:
if (camera.isCrouched) camerafinalPosition.Y -= (camera.characterOffset.Y * 6 / 20);
What's the point of that? Let's assume your camera is crouched - shouldn't its Y value be lowered already? I don't know how you render your main camera, but look at the mirror's rendering position - it's way lower than your main eye. I don't know how you use your isCrouched member, but if you want to lower the camera, just write yourself a method Crouch() or something similar that lowers the Y value a little. Later on you use your DrawRoom method, in which you pass the camera.position parameter - yet it's not "lowered" by the crouch value, it's just the "pure" camera.position. That may be the reason it's not rendering properly. Let me know if that helped you at all.
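A minimal sketch of that suggestion, with names assumed from the question, so the crouch offset lives in one place and both the main render and the mirror render use the same eye position:
// Hypothetical helper: centralize the crouch offset (Camera is the question's own camera type).
Vector3 GetEyePosition(Camera camera)
{
    Vector3 eye = camera.position;
    if (camera.isCrouched)
        eye.Y -= camera.characterOffset.Y * 6f / 20f; // same offset the question applies
    return eye;
}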