How would you lerp the positions of two matrices? - c#

In my code, I create two lists of matrices that store information about two different meshes, using the following code:
Mesh mesh;
MeshFilter filter;

public SphereMesh SphereMesh;
public CubeMesh CubeMesh;
public Material pointMaterial;
public Mesh pointMesh;

public List<Matrix4x4> matrices1 = new List<Matrix4x4>();
public List<Matrix4x4> matrices2 = new List<Matrix4x4>();

[Space]
public float normalOffset = 0f;
public float globalScale = 0.1f;
public Vector3 scale = Vector3.one;
public int matricesNumber = 1; // determines which matrices to store info in

public void StorePoints()
{
    Vector3[] vertices = mesh.vertices;
    Vector3[] normals = mesh.normals;
    Vector3 scaleVector = scale * globalScale;

    // Initialize chunk indexes.
    int startIndex = 0;
    int endIndex = Mathf.Min(1023, mesh.vertexCount);
    int pointCount = 0;

    while (pointCount < mesh.vertexCount)
    {
        // Create points for the current chunk.
        for (int i = startIndex; i < endIndex; i++)
        {
            var position = transform.position + transform.rotation * vertices[i] + transform.rotation * (normals[i].normalized * normalOffset);
            var rotation = Quaternion.identity;
            pointCount++;

            if (matricesNumber == 1)
            {
                matrices1.Add(Matrix4x4.TRS(position, rotation, scaleVector));
            }
            if (matricesNumber == 2)
            {
                matrices2.Add(Matrix4x4.TRS(position, rotation, scaleVector));
            }

            rotation = transform.rotation * Quaternion.LookRotation(normals[i]);
        }

        // Modify start and end index to the range of the next chunk.
        startIndex = endIndex;
        endIndex = Mathf.Min(startIndex + 1023, mesh.vertexCount);
    }
}
The GameObject starts as a Cube mesh, and this code stores the info about that mesh in matrices1. Elsewhere, in code not shown, the Mesh changes to a Sphere, matricesNumber changes to 2, and then the code above runs again to store the info of the new Sphere mesh in matrices2.
This seems to be working, as I'm able to use code like Graphics.DrawMesh(pointMesh, matrices1[i], pointMaterial, 0);
to draw 1 mesh per vertex of the Cube mesh. And I can use that same line (but with matrices2[i]) to draw 1 mesh per vertex of the Sphere mesh.
The Question: How would I draw a mesh for each vertex of the Cube mesh (info stored in matrices1) on the screen and then make those vertex meshes Lerp into the positions of the Sphere mesh's (info stored in matrices2) vertices?
I'm trying to hack at it with things like
float t = Mathf.Clamp((Time.time % 2f) / 2f, 0f, 1f);
matrices1.position.Lerp(matrices1.position, matrices2.position, t);
but obviously this is invalid code. What might be the solution? Thanks.

In Unity, Matrix4x4 is usually only used in special cases where you calculate transformation matrices directly. You rarely need raw matrices in scripts; working with Vector3, Quaternion and the functionality of the Transform class is usually more straightforward. Plain matrices come up in special cases like setting up a nonstandard camera projection.
In your case it seems to me you actually only need to store and access position, rotation and scale. I would suggest not using Matrix4x4 at all, but rather something simple like e.g.
// [Serializable] (from System) makes instances appear in the Inspector,
// which might be very helpful in your case
[Serializable]
public class TransformData
{
public Vector3 position;
public Quaternion rotation;
public Vector3 scale;
}
Then with
public List<TransformData> matrices1 = new List<TransformData>();
public List<TransformData> matrices2 = new List<TransformData>();
you could simply lerp the individual components, e.g.
var lerpedPosition = Vector3.Lerp(matrices1[index].position, matrices2[index].position, lerpFactor);
And if you then need it as a Matrix4x4 you can still create it ad-hoc like e.g.
var lerpedMatrix = Matrix4x4.Translate(lerpedPosition);
or however you want to use the values.
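Putting it together, a minimal sketch (assuming the TransformData lists suggested above, filled in the same vertex order for both meshes, and reusing the two-second timer from the question) might look like this:

    // Sketch: every frame, lerp each stored point from the cube layout (matrices1)
    // to the sphere layout (matrices2) and draw one pointMesh at the result.
    void Update()
    {
        float t = Mathf.Clamp01((Time.time % 2f) / 2f); // 0..1 over two seconds, then repeats

        int count = Mathf.Min(matrices1.Count, matrices2.Count);
        for (int i = 0; i < count; i++)
        {
            Vector3 pos    = Vector3.Lerp(matrices1[i].position, matrices2[i].position, t);
            Quaternion rot = Quaternion.Slerp(matrices1[i].rotation, matrices2[i].rotation, t);
            Vector3 scl    = Vector3.Lerp(matrices1[i].scale, matrices2[i].scale, t);

            // Rebuild a Matrix4x4 ad hoc only where the drawing API needs one
            Graphics.DrawMesh(pointMesh, Matrix4x4.TRS(pos, rot, scl), pointMaterial, 0);
        }
    }

If you prefer to keep the raw Matrix4x4 lists instead, the stored position can be read back from the fourth column, e.g. (Vector3)matrices1[i].GetColumn(3), and lerped the same way.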

Related

Perlin Noise wave effect on a 3d sphere

I have been trying to figure out how to create a wave effect on a 3D sphere using Perlin noise. I have found some tutorials on how to do it on a plane, but none on a 3D object. The code below works just fine on a plane; does anyone know how to adapt it to a 3D sphere? Thank you in advance for your help.
using UnityEngine;
using System.Collections;

public class PerlinTerrain : MonoBehaviour {

    public float perlinScale;
    public float waveSpeed;
    public float waveHeight;
    public float offset;

    void Update () {
        CalcNoise();
    }

    void CalcNoise() {
        MeshFilter mF = GetComponent<MeshFilter>();
        MeshCollider mC = GetComponent<MeshCollider>();
        mC.sharedMesh = mF.mesh;

        Vector3[] verts = mF.mesh.vertices;
        for (int i = 0; i < verts.Length; i++) {
            float pX = (verts[i].x * perlinScale) + (Time.timeSinceLevelLoad * waveSpeed) + offset;
            float pZ = (verts[i].z * perlinScale) + (Time.timeSinceLevelLoad * waveSpeed) + offset;
            verts[i].y = Mathf.PerlinNoise(pX, pZ) * waveHeight;
        }

        mF.mesh.vertices = verts;
        mF.mesh.RecalculateNormals();
        mF.mesh.RecalculateBounds();
    }
}
The easiest way would be to apply the 2D noise to the surface. You can do this by using the vertex normal to know which way to move the vertex.
In your code you are only moving the Y position. For a plane this means you moved the vertex along its normal by the amount the noise function gives you. For a sphere you could do something like this:
verts[i] = verts[i] + (Mathf.PerlinNoise(pX, pZ) * waveHeight * mF.mesh.normals[i].normalized);
This will work for any mesh because it takes the original position into account and just moves the vertex in the direction of its normal.
It will only move the vertex outwards, which means it always ends up further out than its original position; you can apply a simple offset to the noise value to make it go inwards as well:
verts[i] = verts[i] + ((Mathf.PerlinNoise(pX, pZ) - 0.5f) * waveHeight * mF.mesh.normals[i].normalized);
Since you are animating this noise, it's important to start from the original mesh each update and not from the one that already had noise applied to it. So in the end it would look something like this:
using UnityEngine;
using System.Collections;

public class PerlinTerrain : MonoBehaviour
{
    public float perlinScale;
    public float waveSpeed;
    public float waveHeight;
    public float offset;

    Vector3[] baseVertices;

    private void OnEnable()
    {
        MeshFilter mF = GetComponent<MeshFilter>();
        baseVertices = mF.mesh.vertices;
    }

    void Update()
    {
        CalcNoise();
    }

    void CalcNoise()
    {
        MeshFilter mF = GetComponent<MeshFilter>();
        mF.sharedMesh.vertices = baseVertices;
        mF.sharedMesh.RecalculateNormals();

        Vector3[] verts = mF.sharedMesh.vertices;
        for (int i = 0; i < verts.Length; i++)
        {
            float pX = (verts[i].x * perlinScale) + (Time.timeSinceLevelLoad * waveSpeed) + offset;
            float pZ = (verts[i].z * perlinScale) + (Time.timeSinceLevelLoad * waveSpeed) + offset;
            verts[i] = verts[i] + (Mathf.PerlinNoise(pX, pZ) * waveHeight * mF.sharedMesh.normals[i].normalized);
        }

        mF.sharedMesh.vertices = verts;
    }
}
But you still have one problem: there are multiple vertices that share the same position but have different normals (typically along UV seams and hard edges). You will have to figure out which vertices are shared and compute a combined normal for them.
Group the vertices by position, take the average normal of each group, and displace each unique position by the same amount.
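A minimal sketch of that grouping step (assuming it replaces the displacement loop in CalcNoise above, that a normals array matching baseVertices is available, and that using System.Collections.Generic is added):

    // Sum the normals of all vertices that share a position, so duplicated
    // seam vertices end up displaced by exactly the same amount.
    Dictionary<Vector3, Vector3> summedNormals = new Dictionary<Vector3, Vector3>();
    for (int i = 0; i < baseVertices.Length; i++)
    {
        Vector3 key = baseVertices[i]; // could be rounded if positions only differ by tiny amounts
        if (summedNormals.ContainsKey(key))
            summedNormals[key] += normals[i];
        else
            summedNormals[key] = normals[i];
    }

    for (int i = 0; i < verts.Length; i++)
    {
        Vector3 sharedNormal = summedNormals[baseVertices[i]].normalized;
        float pX = (baseVertices[i].x * perlinScale) + (Time.timeSinceLevelLoad * waveSpeed) + offset;
        float pZ = (baseVertices[i].z * perlinScale) + (Time.timeSinceLevelLoad * waveSpeed) + offset;
        verts[i] = baseVertices[i] + (Mathf.PerlinNoise(pX, pZ) - 0.5f) * waveHeight * sharedNormal;
    }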
Side note: the 3D variant of this kind of noise is usually called simplex noise (https://en.wikipedia.org/wiki/Simplex_noise); Mathf.PerlinNoise only provides the 2D version.

How to draw latitude/longitude lines on the surface of a Sphere in Unity 3D?

I'm a beginner with Unity 3D, and I'm trying to create a globe. I created a Sphere game object in a scene and set its radius to 640. Now I want to draw latitude/longitude lines (every 10 degrees) on the surface of this Sphere from a C# script.
I tried to draw each lat/long line using a LineRenderer, but could not get it to work.
My code:
public class EarthController : MonoBehaviour {

    private float _radius = 0;

    // Use this for initialization
    void Start () {
        _radius = gameObject.transform.localScale.x;
        DrawLatLongLines();
    }

    // Update is called once per frame
    void Update()
    {
    }

    private void DrawLatLongLines()
    {
        float thetaStep = 0.0001F;
        int size = (int)((2.0 * Mathf.PI) / thetaStep);

        // draw lat lines
        for (int latDeg = 0; latDeg < 90; latDeg += 10)
        {
            // throws an error here.
            // seems I cannot add more than one component per type
            LineRenderer latLineNorth = gameObject.AddComponent<LineRenderer>();
            latLineNorth.startColor = new Color(255, 0, 0);
            latLineNorth.endColor = latLineNorth.startColor;
            latLineNorth.startWidth = 0.2F;
            latLineNorth.endWidth = 0.2F;
            latLineNorth.positionCount = size;

            LineRenderer latLineSouth = Object.Instantiate<LineRenderer>(latLineNorth);

            float theta = 0;
            var r = _radius * Mathf.Cos(Mathf.Deg2Rad * latDeg);
            var z = _radius * Mathf.Sin(Mathf.Deg2Rad * latDeg);

            for (int i = 0; i < size; i++)
            {
                var x = r * Mathf.Sin(theta);
                var y = r * Mathf.Cos(theta);
                Vector3 pos = new Vector3(x, y, z);
                latLineNorth.SetPosition(i, pos);
                pos.z = -z;
                latLineSouth.SetPosition(i, pos);
                theta += thetaStep;
            }
        }
    }
}
What's the correct way to do this? I don't want to write a custom shader (if possible), since I know nothing about shaders.
The usual way to customize how 3D objects look is to use shaders. In your case you would need a wireframe shader, and if you want control over the number of lines, you might have to write it yourself.
Another solution is to use a texture. Unity ships with default materials that apply a texture to your object, so you can use an image texture that contains your lines.
If you don't want a texture and really just want the lines, you can use the LineRenderer. A LineRenderer doesn't need a 3D object to work; you just give it a number of points and it links them with a line.
Here is how I would do it:
1. Create an object with a LineRenderer and enter points that form a circle (you can do this dynamically in C# or manually in the editor on the LineRenderer).
2. Store this as a prefab (optional) and duplicate it in your scene (copy/paste); each copy draws a new line.
3. Just by modifying the rotation, scale and position of your lines you can recreate a sphere.
If your question is "What is the equation of a circle so I can find the proper x and y coord?" here is a short idea to compute x and y coord
for(int i =0; i< nbPointsOnTheCircle; ++i)
{
var x = Mathf.Cos(nbPointsOnTheCircle / 360);
var y = Mathf.Sin(nbPointsOnTheCircle / 360);
}
If your question is "How to assign points on the line renderer dynamicaly with Unity?" here is a short example:
public class Circle : MonoBehavior
{
private void Start()
{
Vector3[] circlePoints = computePoints(); // a function that compute points of a circle
var lineRenderer = GetComponent<LineRenderer>();
linerenderer.Positions = circlePoints;
}
}
EDIT
You can only have one LineRenderer per GameObject; this is why the example above only draws one circle. You already have an earth controller, but that controller can't add many LineRenderers to itself. Instead, the idea is that the earth object has a script that does something like the following:
private void Start()
{
    for (int i = 0; i < nbLines; ++i)
    {
        GameObject go = new GameObject();
        go.AddComponent<LineRenderer>();
        go.AddComponent<Circle>();
        go.transform.SetParent(transform);
        go.name = "Circle" + i;
    }
}
Then you will see several objects created in your scene, each having exactly one LineRenderer and one Circle component.
From there you should be able to do what you want (for instance, pass parameters to each Circle so that each circle is a bit different).
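For completeness, a hypothetical Circle component along those lines (computePoints, the radius field and the point count are assumptions, not part of the original answer):

    using UnityEngine;

    // Sketch: builds a flat ring of points and feeds them to the LineRenderer
    // on the same GameObject. The parent object then rotates, scales and
    // positions each instance to form the latitude/longitude lines.
    public class Circle : MonoBehaviour
    {
        public float radius = 640f;   // assumed: matches the sphere radius
        public int pointCount = 360;  // assumed: resolution of the line

        private void Start()
        {
            Vector3[] circlePoints = computePoints();
            var lineRenderer = GetComponent<LineRenderer>();
            lineRenderer.useWorldSpace = false;               // follow this object's transform
            lineRenderer.loop = true;                         // close the ring
            lineRenderer.positionCount = circlePoints.Length;
            lineRenderer.SetPositions(circlePoints);
        }

        private Vector3[] computePoints()
        {
            var points = new Vector3[pointCount];
            for (int i = 0; i < pointCount; i++)
            {
                float angle = (2f * Mathf.PI * i) / pointCount;
                points[i] = new Vector3(radius * Mathf.Cos(angle), radius * Mathf.Sin(angle), 0f);
            }
            return points;
        }
    }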

Problems when constructing a Fractal Tree

I've been trying to use what I learned in this fractal tutorial to make something I can use for my own project. I want to write a script that can generate trees in Unity. I feel I've made a lot of progress, but I have hit a wall.
The way I've set up the script, it scales the branches according to two parameters:
a) a public 'childScale' variable,
b) the number of branches sprouting from the previous branch.
The biggest problem has been that, when children are parented to a non-uniformly scaled object, they become distorted in unintended ways. To bypass this, I've made the prefab instances (which are (1, 1, 1) by default) children of other GameObjects, and those GameObjects are in turn parents of further GameObjects that contain other prefab instances. This is problematic because the scaling principle of a fractal requires a continuous inheritance of scale from the parent, but the child prefab instances never pass anything along. So I end up having to adjust the proportions of the prefab instances to what they would be if they could inherit directly from one another. The script below 'works' because of the exponential modifier (middle of Start(), after 'else'), but only if all branches have the same number of offshoots, i.e. only if the public min and max branch density variables are the same integer.
To summarize, I have two problems that I'd like your input on.
The main problem: how can I maintain the scaling integrity of the fractal principle, despite the lack of a continuous hierarchy, while allowing for non-uniformity in the overall form of the tree?
A secondary problem, by far, is that my 'thicknessScaler' variable makes my branches too thin, especially the more there are. Right now it just divides 1 by the number of offshoots, so I need something that doesn't shave off quite as much.
using UnityEngine;
using System.Collections;
public class TreeFractal : MonoBehaviour
{
public GameObject[] branches; // the last branch is pointed, while all the others are rounded
public int maxDepth;
public float childScale;
public float maxTwist; // OFF temporarily
public Vector3 baseBranchSize; // proportions of instantiated prefab
public int minBranchDensity; // amount of offshoots per node, randomized
public int maxBranchDensity;
public float branchTilt;
private int depth;
private int branchDensity;
private GameObject branch;
private GameObject instance;
private TreeFractal grandparent;
private float displace;
private float thicknessScaler = 1;
private float parentDensity;
private void Start()
{
if (depth < maxDepth)
branch = branches[0];
else
branch = branches[1];
instance = Instantiate(branch);
instance.transform.parent = transform; // prefabs (non-uniform proportions) are made the children of uniform-scaled Game Objects.
if (depth == 0)
{
displace = baseBranchSize.y;
instance.transform.localScale = baseBranchSize;
}
else //Multiplication by density^depth is to make up for the shrinking game objects, as the prefab instances do not pass on their scale
//while the GameObjects do. This only works when all 'depths' of the tree have the same amount of offshoots.
//Because the GameObject must remain uniform, all scaling of the y axis must occur in the prefab instance.
{
displace = baseBranchSize.y * Mathf.Pow(parentDensity, depth);
//if (depth == 2)3
print(baseBranchSize.y * Mathf.Pow(parentDensity, depth));
instance.transform.localScale = new Vector3
(
baseBranchSize.x,
baseBranchSize.y * Mathf.Pow(parentDensity, depth),
baseBranchSize.z
);
}
instance.transform.localPosition = new Vector3(0f, 1f * (displace / 2), 0f); //prefab instance pivots at one end of the GameObject, for rotations.
instance.transform.localRotation = Quaternion.Euler(0f, 0f, 0f);
transform.Rotate(Random.Range(-maxTwist * ((float)(depth + 1)/ maxDepth), maxTwist * ((float)(depth + 1) / maxDepth)), 0f, 0f);
//increases the potential randomized twist more, the smaller the branches get.
if (depth < maxDepth)
{
StartCoroutine(CreateChildren());
}
}
private void Update()
{
}
private IEnumerator CreateChildren()
{
branchDensity = Random.Range(minBranchDensity, maxBranchDensity + 1);
for (int i = 0; i < branchDensity; i++)
{
yield return new WaitForSeconds(Random.Range(0.1f, 0.5f));
Quaternion quaternion = BranchRotater(branchDensity, i);
new GameObject("Fractal Child").AddComponent<TreeFractal>().
Initialize(this, i, quaternion);
}
}
private Quaternion BranchRotater (int density, int childIndex) //returns the rotation of the branch depending on the index and amount of branches.
{
Quaternion quaternion = Quaternion.Euler
(0f,
(360 / density) * childIndex,
branchTilt
);
return quaternion;
}
private void Initialize(TreeFractal parent, int childIndex, Quaternion quaternion)
{
branches = parent.branches;
branchTilt = parent.branchTilt;
maxDepth = parent.maxDepth;
depth = parent.depth + 1;
baseBranchSize = parent.baseBranchSize;
maxTwist = parent.maxTwist;
transform.parent = parent.transform;
childScale = parent.childScale;
minBranchDensity = parent.minBranchDensity;
maxBranchDensity = parent.maxBranchDensity;
thicknessScaler = 1f / parent.branchDensity; // I need a better equation here. This scaler is too small.
transform.localScale = Vector3.one * childScale * thicknessScaler; // reproportions all 3 axes of child GameObject so that
//the child remains of uniform scale. This must then be compensated for in the scaling of said object's child-prefab.
parentDensity = parent.branchDensity;
transform.localPosition = Vector3.up * parent.displace; //positions child relative to its parent
transform.localRotation = quaternion;
}
}
If you build your prefab like this, you can apply the parent-to-child scale at the root node of the prefab and the non-uniform scale at the sub-node. If you then attach a child's root to the parent's root node, only the uniform scale carries over.
prefab:
[root]
|--->[non-uni]
| |---> mesh
.
.
[childRoot]
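In script form, a minimal sketch of that hierarchy (root, nonUniform and childRoot are hypothetical names; branchPrefab, childScale and baseBranchSize stand in for the corresponding fields from the question):

    // Sketch: only the uniform scale lives on the node that children attach to,
    // so a child branch never inherits the non-uniform proportions.
    GameObject root = new GameObject("root");
    root.transform.localScale = Vector3.one * childScale;        // uniform, parent-to-child scale

    GameObject nonUniform = new GameObject("non-uni");
    nonUniform.transform.SetParent(root.transform, false);
    nonUniform.transform.localScale = baseBranchSize;            // non-uniform proportions stay here

    GameObject meshInstance = Instantiate(branchPrefab);         // the visible branch mesh
    meshInstance.transform.SetParent(nonUniform.transform, false);

    GameObject childRoot = new GameObject("childRoot");          // the next branch attaches to the ROOT
    childRoot.transform.SetParent(root.transform, false);
    childRoot.transform.localPosition = Vector3.up * baseBranchSize.y; // hypothetical offset to the branch tip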

Create GameObjects that have gravity based on size

I've built a fractal-based object generator in C# and Unity that builds branches of objects which then bounce off each other using colliders and rigidbodies. Right now they hit each other and keep moving farther and farther apart. What I'd like to do is assign each object a certain level of gravitational attraction, so that even when they're repelled by a collision they draw themselves back in. I've got everything working except for the gravity side of things. Does anyone have experience with this who wouldn't mind giving me some direction? Thanks!
using UnityEngine;
using UnityEngine.UI;
using System.Collections;
public class BuildFractal : MonoBehaviour
{
public Mesh[] meshes;
public Material material;
public Material[,] materials;
private Rigidbody rigidbody;
public int maxDepth; // max children depth
private int depth;
public float childScale; // set scale of child objects
public float spawnProbability; // determine whether a branch is created or not
public float maxRotationSpeed; // set maximium rotation speed
private float rotationSpeed;
public float maxTwist;
public Text positionText;
// Create arrays for direction and orientation data
private static Vector3[] childDirections = {
Vector3.up,
Vector3.right,
Vector3.left,
Vector3.forward,
Vector3.back,
// Vector3.down
};
private static Quaternion[] childOrientations = {
Quaternion.identity,
Quaternion.Euler(0f, 0f, -90f),
Quaternion.Euler(0f, 0f, 90f),
Quaternion.Euler(90f, 0f, 0f),
Quaternion.Euler(-90f, 0f, 0f),
// Quaternion.Euler(180f, 0f, 0f)
};
private void Start ()
{
rotationSpeed = Random.Range(-maxRotationSpeed, maxRotationSpeed);
transform.Rotate(Random.Range(-maxTwist, maxTwist), 0f, 0f);
if (materials == null)
{
InitializeMaterials();
}
// Select from random range of meshes
gameObject.AddComponent<MeshFilter>().mesh = meshes[Random.Range(0, meshes.Length)];
// Select from random range of colors
gameObject.AddComponent<MeshRenderer>().material = materials[depth, Random.Range(0, 2)];
// Add a collider to each object
gameObject.AddComponent<SphereCollider>().isTrigger = false;
// Add a Rigidbody to each object
gameObject.AddComponent<Rigidbody>();
gameObject.GetComponent<Rigidbody>().useGravity = false;
gameObject.GetComponent<Rigidbody>().mass = 1000;
// Create Fractal Children
if (depth < maxDepth)
{
StartCoroutine(CreateChildren());
}
}
private void Update ()
{
transform.Rotate(0f, rotationSpeed * Time.deltaTime, 0f);
}
private IEnumerator CreateChildren ()
{
for (int i = 0; i < childDirections.Length; i++)
{
if (Random.value < spawnProbability)
{
yield return new WaitForSeconds(Random.Range(0.1f, 1.5f));
new GameObject("Fractal Child").AddComponent<BuildFractal>().Initialize(this, i);
}
/*if (i == childDirections.Length)
{
DestroyChildren();
}*/
// positionText.text = transform.position.ToString(this);
}
}
private void Initialize (BuildFractal parent, int childIndex)
{
maxRotationSpeed = parent.maxRotationSpeed;
// copy mesh and material references from parent object
meshes = parent.meshes;
materials = parent.materials;
maxTwist = parent.maxTwist;
// set depth and scale based on variables defined in parent
maxDepth = parent.maxDepth;
depth = parent.depth + 1;
childScale = parent.childScale;
transform.parent = parent.transform; // set child transform to parent
// transform.localScale = Vector3.one * childScale;
transform.localScale = Vector3.one * Random.Range(childScale / 10, childScale * 1);
transform.localPosition = childDirections[childIndex] * (Random.Range((0.1f + 0.1f * childScale),(0.9f + 0.9f * childScale)));
transform.localRotation = childOrientations[childIndex];
spawnProbability = parent.spawnProbability;
}
private void InitializeMaterials ()
{
materials = new Material[maxDepth + 1, 2];
for (int i = 0; i <= maxDepth; i++)
{
float t = i / (maxDepth - 1f);
t *= t;
// Create a 2D array to hold color progressions
materials[i, 0] = new Material(material);
materials[i, 0].color = Color.Lerp(Color.gray, Color.white, t);
materials[i, 1] = new Material(material);
materials[i, 1].color = Color.Lerp(Color.white, Color.white, t);
}
// materials[maxDepth, 0].color = Color.white;
materials[maxDepth, 1].color = Color.white;
}
}
It depends on how accurate your gravity simulation has to be. Assuming all objects in your simulation have the same density, you could use Mesh.bounds to roughly estimate their volume:
Vector3 size = myMesh.bounds.size;
float volume = size.x * size.y * size.z * scale; // scale could be childScale in your case
Since your simulation is a fractal, you will have to apply childScale in each of your fractal's iterations. But you don't have to recalculate the base volume of your mesh if it doesn't change.
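In other words (a small sketch, assuming each fractal depth scales the whole object uniformly by childScale): scaling a mesh by a factor s multiplies its volume by s^3, so the per-depth volume can be computed without touching the mesh again:

    // Hypothetical helper: volume of one instance at a given fractal depth.
    float VolumeAtDepth(Vector3 size, float childScale, int depth)
    {
        float baseVolume = size.x * size.y * size.z;          // from mesh.bounds.size
        return baseVolume * Mathf.Pow(childScale, 3 * depth); // s^3 for every uniform scaling step
    }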
As for the gravity simulation:
This might get quite complex with a high number of objects. You would have to simulate a whole gravity field.
The calculation for only two objects interacting with each other is rather simple. The forces applied to the bodies attracting each other can be calculated by the Newtonian formula
F1 = F2 = G * m1 * m2 / r^2
(see: https://en.wikipedia.org/wiki/Gravitational_constant)
But you may have far more than two objects in your system. You would have to calculate the above relationship for every pair of objects, and for each object you would then sum all the calculated forces and apply the resulting force.
Let's say you have N objects in your scene: you would have to do (N-1) of the above calculations for each object. That yields N·(N-1) calculations (roughly N² of them), which gets out of hand quite quickly, especially if you're doing it in a fractal structure.
To keep this complexity in check, you could introduce a range of influence, so that only nearby objects affect each other, although this further reduces the accuracy of the simulation.
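As a minimal sketch of that pairwise attraction (GravityBody is a hypothetical component each fractal object would carry; it assumes every object has a Rigidbody, and G is exaggerated so the pull is visible at scene scale):

    using System.Collections.Generic;
    using UnityEngine;

    public class GravityBody : MonoBehaviour
    {
        const float G = 0.01f;  // illustrative constant, not the real gravitational constant

        static readonly List<GravityBody> bodies = new List<GravityBody>();

        Rigidbody rb;

        void OnEnable()
        {
            rb = GetComponent<Rigidbody>();
            bodies.Add(this);
        }

        void OnDisable()
        {
            bodies.Remove(this);
        }

        void FixedUpdate()
        {
            // F = G * m1 * m2 / r^2, summed over every other body
            foreach (var other in bodies)
            {
                if (other == this) continue;

                Vector3 delta = other.rb.position - rb.position;
                float sqrDist = Mathf.Max(delta.sqrMagnitude, 0.01f); // avoid blowing up when r ~ 0
                float force = G * rb.mass * other.rb.mass / sqrDist;

                rb.AddForce(delta.normalized * force);
            }
        }
    }

The mass could then be derived from the volume estimate above (mass = density * volume) instead of the fixed 1000 used in the question, and a distance cutoff inside the loop would give you the 'range of influence' mentioned above.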

Only seeing 1 object out of an array of 3 in XNA

Can anyone see where I am going wrong here?
I have a CameraObject class (it's not a camera, simply the Model of a box representing a "camera") that has a Model and a Position. It also has the usual LoadContent(), Draw() and Update() methods.
However, when I draw the array of models, I only see one model on the screen (well, there might be 3, but they might all be in the same location).
The Draw() method for the CameraModel class looks like this:
public void Draw(Matrix view, Matrix projection)
{
    transforms = new Matrix[CameraModel.Bones.Count];
    CameraModel.CopyAbsoluteBoneTransformsTo(transforms);

    // Draw the model
    foreach (ModelMesh myMesh in CameraModel.Meshes)
    {
        foreach (BasicEffect myEffect in myMesh.Effects)
        {
            myEffect.World = transforms[myMesh.ParentBone.Index];
            myEffect.View = view;
            myEffect.Projection = projection;

            myEffect.EnableDefaultLighting();
            myEffect.SpecularColor = new Vector3(0.25f);
            myEffect.SpecularPower = 16;
        }
        myMesh.Draw();
    }
}
Then in my Game1 class I create an array of CameraObject objects:
CameraObject[] cameraObject = new CameraObject[3];
Which I initialize in Initialize(), so each new object should be offset by +10 on each axis from the previous one:
for (int i = 0; i < cameraObject.Length; i++)
{
    cameraObject[i] = new CameraObject();
    cameraObject[i].Position = new Vector3(i * 10, i * 10, i * 10);
}
And finally Draw()
Matrix view = camera.viewMatrix;
Matrix projection = camera.projectionMatrix;

for (int i = 0; i < cameraObject.Length; i++)
{
    cameraObject[i].Draw(view, projection);
}
Where view and projection are from my Camera() class which looks like so:
viewMatrix = Matrix.Identity;
projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45.0f), 16 / 9, .5f, 500f);
But I only see one object drawn to the screen. I have stepped through the code and all seems well, but I can't figure out why I can't see 3 objects.
Can anyone spot where I am going wrong?
This is the code in my Camera() class to UpdateViewMatrix:
private void UpdateViewMatrix(Matrix chasedObjectsWorld)
{
switch (currentCameraMode)
{
case CameraMode.free:
// To be able to rotate the camera and and not always have it looking at the same point
// Normalize the cameraRotation’s vectors, as those are the vectors that the camera will rotate around
cameraRotation.Forward.Normalize();
cameraRotation.Up.Normalize();
cameraRotation.Right.Normalize();
// Multiply the cameraRotation by the Matrix.CreateFromAxisAngle() function,
// which rotates the matrix around any vector by a certain angle
// Rotate the matrix around its own vectors so that it works properly no matter how it’s rotated already
cameraRotation *= Matrix.CreateFromAxisAngle(cameraRotation.Right, pitch);
cameraRotation *= Matrix.CreateFromAxisAngle(cameraRotation.Up, yaw);
cameraRotation *= Matrix.CreateFromAxisAngle(cameraRotation.Forward, roll);
// After the matrix is rotated, the yaw, pitch, and roll values are set back to zero
yaw = 0.0f;
pitch = 0.0f;
roll = 0.0f;
// The target is changed to accommodate the rotation matrix
// It is set at the camera’s position, and then cameraRotation’s forward vector is added to it
// This ensures that the camera is always looking in the direction of the forward vector, no matter how it’s rotated
target = Position + cameraRotation.Forward;
break;
case CameraMode.chase:
// Normalize the rotation matrix’s forward vector because we’ll be using that vector to roll around
cameraRotation.Forward.Normalize();
chasedObjectsWorld.Right.Normalize();
chasedObjectsWorld.Up.Normalize();
cameraRotation = Matrix.CreateFromAxisAngle(cameraRotation.Forward, roll);
// Each frame, desiredTarget will be set to the position of whatever object we’re chasing
// Then set the actual target equal to the desiredTarget, can then change the target’s X and Y coordinates at will
desiredTarget = chasedObjectsWorld.Translation;
target = desiredTarget;
target += chasedObjectsWorld.Right * yaw;
target += chasedObjectsWorld.Up * pitch;
// Always want the camera positioned behind the object,
// desiredPosition needs to be transformed by the chased object’s world matrix
desiredPosition = Vector3.Transform(offsetDistance, chasedObjectsWorld);
// Smooth the camera’s movement and transition the target vector back to the desired target
Position = Vector3.SmoothStep(Position, desiredPosition, .15f);
yaw = MathHelper.SmoothStep(yaw, 0f, .1f);
pitch = MathHelper.SmoothStep(pitch, 0f, .1f);
roll = MathHelper.SmoothStep(roll, 0f, .1f);
break;
case CameraMode.orbit:
// Normalizing the rotation matrix’s forward vector, and then cameraRotation is calculated
cameraRotation.Forward.Normalize();
// Instead of yawing and pitching over cameraRotation’s vectors, we yaw and pitch over the world axes
// By rotating over world axes instead of local axes, the orbiting effect is achieved
cameraRotation = Matrix.CreateRotationX(pitch) * Matrix.CreateRotationY(yaw) * Matrix.CreateFromAxisAngle(cameraRotation.Forward, roll);
desiredPosition = Vector3.Transform(offsetDistance, cameraRotation);
desiredPosition += chasedObjectsWorld.Translation;
Position = desiredPosition;
target = chasedObjectsWorld.Translation;
roll = MathHelper.SmoothStep(roll, 0f, .2f);
break;
}
// Use this line of code to set up the View Matrix
// Calculate the view matrix
// The up vector is based on how the camera is rotated and not off the standard Vector3.Up
// The view matrix needs an up vector to fully orient itself in 3D space, otherwise,
// the camera would have no way of knowing whether or not it’s upside-down
viewMatrix = Matrix.CreateLookAt(Position, target, cameraRotation.Up);
}
I'm not seeing in your code where cameraObject[n].Position (which is probably the only thing that uniquely differentiates the position of the three models) affects the effect.World property.
effect.World = transforms[myMesh.ParentBone.Index];
does not take the individual model's position into account.
Try something like this:
for (int i = 0; i < cameraObject.Length; i++)
{
    cameraObject[i].Draw(view, projection, cameraObject[i].Position);
}

// later, in the Draw method
public void Draw(Matrix view, Matrix projection, Vector3 pos)
{
    // ...
    myEffect.World = transforms[myMesh.ParentBone.Index] * Matrix.CreateTranslation(pos);
    // ...
}
CameraObject[1] & [2] are located at a greater positive Z value than camera[0] and the view matrix is located at the origin & looking in a negative Z direction (remember, the view matrix is the inverted equivalent of a world matrix).
Instead of setting your viewMatrix to Matrix.Identity, set it to this and you might see all three:
viewMatrix = Matrix.CreateLookAt(new Vector3(0,0,75), Vector3.Zero, Vector3.Up);
