GameObject can rotate around Y and Z but not X - c#

I'm new to Unity development and I'm having some issues rotating GameObjects.
I want to make an object rotate -90 degrees each time a function is called. Right now it rotates around its Y and Z axes without any trouble; however, if I change the parameters so it rotates around X, it gets stuck after rotating from -90° (270°, actually) to 180°.
Here's the code I'm using for testing:
public class RotateCube : MonoBehaviour
{
    GameObject cube;
    bool rotating;

    private void Start()
    {
        cube = GameObject.Find("Cube");
        rotating = false;
    }

    private void Update()
    {
        if (!rotating)
        {
            StartCoroutine(Rotate());
        }
    }

    private IEnumerator Rotate()
    {
        rotating = true;
        float finalAngle = SubstractNinety(cube.transform.eulerAngles.x);
        while (cube.transform.rotation.eulerAngles.x != finalAngle)
        {
            cube.transform.rotation = Quaternion.RotateTowards(cube.transform.rotation, Quaternion.Euler(finalAngle, cube.transform.rotation.eulerAngles.y, cube.transform.rotation.eulerAngles.z), 100f * Time.deltaTime);
            yield return null;
        }
        cube.transform.rotation = Quaternion.Euler(finalAngle, cube.transform.rotation.eulerAngles.y, cube.transform.rotation.eulerAngles.z);
        yield return null;
        rotating = false;
    }

    private float SubstractNinety(float angle)
    {
        if (angle < 90)
        {
            return 270f;
        }
        return angle - 90;
    }
}
I'm updating all the coordinates in Quaternion.Euler on each iteration because I want the user to be able to drag the object while it's rotating, but I wouldn't mind if the solution requires defining the Quaternion before the loop.

Why do you bother to go through the eulerAngles at all?
When using the .eulerAngles property to set a rotation, it is important to understand that although you are providing X, Y, and Z rotation values to describe your rotation, those values are not stored in the rotation. Instead, the X, Y & Z values are converted to the Quaternion's internal format.
When you read the .eulerAngles property, Unity converts the Quaternion's internal representation of the rotation to Euler angles. Because, there is more than one way to represent any given rotation using Euler angles, the values you read back out may be quite different from the values you assigned. This can cause confusion if you are trying to gradually increment the values to produce animation.
To avoid these kinds of problems, the recommended way to work with rotations is to avoid relying on consistent results when reading .eulerAngles particularly when attempting to gradually increment a rotation to produce animation. For better ways to achieve this, see the Quaternion * operator.
Rather, use Quaternion directly! Simply add the desired rotation to the existing one using the * operator:
private IEnumerator Rotate()
{
    rotating = true;

    // This returns a new Quaternion rotation which is the original
    // rotation and then rotated about -90° on the local X axis
    var finalRotation = cube.transform.rotation * Quaternion.Euler(-90, 0, 0);

    // simply directly use Quaternion comparison
    while (cube.transform.rotation != finalRotation)
    {
        cube.transform.rotation = Quaternion.RotateTowards(cube.transform.rotation, finalRotation, 100f * Time.deltaTime);
        yield return null;
    }

    cube.transform.rotation = finalRotation;
    rotating = false;
}

Related

How to rotate cube smoothly

I am trying to rotate a cube smoothly by 90 degrees every time I press the space key. In my code, whenever I decrease the speed below 1 the rotation no longer lands consistently on 90-degree steps, and with a speed above 1 it rotates instantly instead of smoothly. Here is my code:
Vector3 to = new Vector3(0, 0, 90);
public float speed = 0.5f;

void Update()
{
    if (Input.GetKeyDown(KeyCode.Space))
    {
        RotateOne();
    }
}

void RotateOne()
{
    transform.eulerAngles = Vector3.Lerp(transform.rotation.eulerAngles, to, speed * Time.deltaTime);
    to += new Vector3(0, 0, 90);
}
You almost had it ;)
The main issue is that you only rotate a tiny little bit once, in the single frame in which you press the key.
You rather want to rotate continuously and only increase the target rotation once when you press the key.
A second issue is using eulerAngles for a continuous rotation. From the API:
When using the .eulerAngles property to set a rotation, it is important to understand that although you are providing X, Y, and Z rotation values to describe your rotation, those values are not stored in the rotation. Instead, the X, Y & Z values are converted to the Quaternion's internal format.
When you read the .eulerAngles property, Unity converts the Quaternion's internal representation of the rotation to Euler angles. Because, there is more than one way to represent any given rotation using Euler angles, the values you read back out may be quite different from the values you assigned. This can cause confusion if you are trying to gradually increment the values to produce animation.
To avoid these kinds of problems, the recommended way to work with rotations is to avoid relying on consistent results when reading .eulerAngles particularly when attempting to gradually increment a rotation to produce animation. For better ways to achieve this, see the Quaternion * operator.
// In general instead of eulerAngles always prefer calculating with
// Quaternion directly where possible
private Quaternion to;

void Start()
{
    to = transform.rotation;
}

void Update()
{
    if (Input.GetKeyDown(KeyCode.Space))
    {
        RotateOne();
    }

    // You want to do this always, not only in the one frame the key goes down
    // Rather use RotateTowards for a linear rotation speed (angle in degrees per second!)
    transform.rotation = Quaternion.RotateTowards(transform.rotation, to, speed * Time.deltaTime);

    // Or if you still rather want to interpolate
    //transform.rotation = Quaternion.Lerp(transform.rotation, to, speed * Time.deltaTime);
}

void RotateOne()
{
    to *= Quaternion.Euler(0, 0, 90);
}
NOTE: there will be one little issue with this. The moment you hit the key 3 or 4 times it will suddenly rotate back! This is because RotateTowards and Lerp both take the shortest way towards the target rotation.
In order to fully avoid this in your case you could rather use a Coroutine and stack your inputs, e.g.
private int pendingRotations;
private bool isRotating;

void Update()
{
    if (Input.GetKeyDown(KeyCode.Space))
    {
        pendingRotations++;
        if (!isRotating) StartCoroutine(RotateRoutine());
    }
}

IEnumerator RotateRoutine()
{
    // just in case
    if (isRotating) yield break;

    isRotating = true;
    var targetRotation = transform.rotation * Quaternion.Euler(0, 0, 90);

    while (transform.rotation != targetRotation)
    {
        transform.rotation = Quaternion.RotateTowards(transform.rotation, targetRotation, speed * Time.deltaTime);

        // tells Unity to "pause" the routine here, render this frame
        // and continue from here in the next frame
        yield return null;
    }

    // in order to end up with a clean value
    transform.rotation = targetRotation;

    isRotating = false;
    pendingRotations--;

    // are there more rotations pending?
    if (pendingRotations > 0)
    {
        // start another routine
        StartCoroutine(RotateRoutine());
    }
}
Quaternion to = Quaternion.Euler(0,0,90);
transform.rotation = Quaternion.Lerp(transform.rotation, to, speed * Time.deltaTime);
Don't change to; just use Time.deltaTime in the Lerp factor.
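Wrapped in a full component, that suggestion would look something like this (names are illustrative; note that Lerp with Time.deltaTime as the factor only eases toward the target and never exactly reaches it):
using UnityEngine;

// Minimal sketch of the suggestion above: keep a fixed target rotation and
// ease toward it every frame. The class and field names are illustrative.
public class LerpToNinety : MonoBehaviour
{
    public float speed = 2f;
    private Quaternion to;

    void Start()
    {
        // the target is set once and never modified afterwards
        to = transform.rotation * Quaternion.Euler(0, 0, 90);
    }

    void Update()
    {
        // exponential ease towards 'to'; it approaches but never exactly reaches the target
        transform.rotation = Quaternion.Lerp(transform.rotation, to, speed * Time.deltaTime);
    }
}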

How to properly rotate a single axis while not touching the others

I have a script where I wish to rotate something on the X-axis, and I'd expect the other two axes to stay put and not be modified (they flip between 0 and 180), and only X to change.
Below you can see the code intended to do just that.
public class TestScript : MonoBehaviour
{
    private void Start()
    {
        // this line is meant to show that I'm resetting the rotation before starting
        transform.eulerAngles = Vector3.zero;
        Debug.Log($"start rotation: {transform.eulerAngles}");
    }

    private void Update()
    {
        float time = (Time.time - 1f) * 10f;
        if (time < 0f || time > 1f)
        {
            return;
        }

        Vector3 rotation = transform.eulerAngles;
        float x = Mathf.Lerp(170f, 0, time);
        rotation.x = x;
        transform.eulerAngles = rotation;
        Debug.Log($"rotation: {transform.eulerAngles}, x: {x}");
    }
}
The output from the console. You can clearly see that the rotation does not go from 170 to 0, but from 0 to 90 and then back to 0.
Now, I'm pretty sure this has something to do with quaternions and their identity, but I'm not sure how this can be avoided.
PS: Before you answer please read this
I do not want to rotate the whole object by using a Vector. I only want to rotate one axis at a time.
I know that this is due to the fact that quaternions represent a rotation, and there are multiple ways to represent a rotation when converting to Euler angles, but that doesn't help me, because I only want to rotate one axis and leave the others unmodified. At all.
I'm actually trying to do this for the other 2 axes too.
The rotation has to be from one value to another during a specific period of time, rather than rotate a certain amount over an unspecified amount of time.
This script is not actually what I'm trying to achieve but a simplified version of my issue. If any of this is not that clear, check these 2 scripts, where I'm actually trying to achieve this. Abstract Axis Rotate
I would store a Vector3 that all one to three TestScript instances can view and edit, such that each Transform has exactly one of these Vector3s associated with it. This could be stored in a dictionary, for instance, where each Transform is used as a key. There are other ways this could be done, but that is out of scope. I'll call the value of this Vector3 cache for the sake of simplicity.
You will also need a way for each script to determine which components of the euler angles are not controlled. This can be done in a manner similar to cache and how to implement is also out of scope for this question.
Have each TestScript update cache so that the vector component it controls is set to its current interpolation value. Then have each component assign that cache to the eulerAngles, or optionally, have only the final TestScript make that assignment.
Since you would want to have this internal state vector not override uncontrolled vector components in the event any outside change occurs, here's what you could do about that:
When the first TestScript variety in a frame executes, compare (using e.g. Quaternion.Angle(a,b) < 0.00001f so that it treats 540,0,0 the same as 180,0,0) the transform's current rotation with the quaternion produced by Quaternion.Euler(cache). If the rotations are determined to be different, assign the uncontrolled components of the transform's eulerAngles to cache and then continue with the usual operation.
Pseudocode, underscore_case stuff is VERY pseudo:
Transform myTransform;
Vector3 startEulers = myTransform.eulerAngles;
Vector3 cache = get_cache(myTransform);

if (is_this_TestScript_first_to_go_this_frame())
{
    // an outside change happened if the transform no longer matches the cache
    if (Quaternion.Angle(myTransform.rotation, Quaternion.Euler(cache)) > 0.0001f)
    {
        if (is_x_uncontrolled(myTransform))
            cache.x = startEulers.x;
        if (is_y_uncontrolled(myTransform))
            cache.y = startEulers.y;
        if (is_z_uncontrolled(myTransform))
            cache.z = startEulers.z;
        set_cache(myTransform, cache);
    }
}

switch (my_axis)
{
    case X:
        cache.x = get_current_interp_val();
        break;
    case Y:
        cache.y = get_current_interp_val();
        break;
    case Z:
        cache.z = get_current_interp_val();
        break;
    default:
        throw exception;
}

myTransform.eulerAngles = cache;
This will get you started toward what you want.
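A minimal sketch of the shared cache itself, assuming a static dictionary keyed by Transform (get_cache/set_cache in the pseudocode would map onto Get/Set here; the class name is just for illustration):
using System.Collections.Generic;
using UnityEngine;

// Hypothetical shared per-Transform euler cache described above.
public static class EulerCache
{
    private static readonly Dictionary<Transform, Vector3> cache = new Dictionary<Transform, Vector3>();

    public static Vector3 Get(Transform t)
    {
        // fall back to the transform's current euler angles the first time it is seen
        if (!cache.TryGetValue(t, out var value))
        {
            value = t.eulerAngles;
            cache[t] = value;
        }
        return value;
    }

    public static void Set(Transform t, Vector3 value)
    {
        cache[t] = value;
    }
}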
This cannot be achieved due to how Unity interprets quaternions. If you want to do this you might want to go a different route. transform.Rotate(Vector3) did the trick for me. This is not the solution I'm using but this is how the script I originally posted would have to be modified to achieve the same thing.
public class TestScript : MonoBehaviour
{
    private float lastX;

    private void Start()
    {
        transform.eulerAngles = Vector3.zero;
        lastX = transform.eulerAngles.x;
        Debug.Log($"start rotation: {transform.eulerAngles}");
    }

    private void Update()
    {
        float time = Time.time - 1f;
        if (time < 0 || time > 1f)
        {
            return;
        }

        float x = Mathf.Lerp(0f, -170f, time);
        Vector3 rotation = new Vector3(x - lastX, 0f, 0f);
        lastX = x;
        transform.Rotate(rotation);
        Debug.Log($"rotation: {transform.eulerAngles}, x: {x}");
    }
}

Unity messes up X-axis rotation

I have a very simple script with which I wish to rotate something on the X-axis, and I'd expect the other two axes to stay put (they flip between 0 and 180) and only X to change.
Below you can see the code intended to do just that.
public class TestScript : MonoBehaviour
{
    private void Start()
    {
        transform.eulerAngles = Vector3.zero;
        Debug.Log($"start rotation: {transform.eulerAngles}");
    }

    private void Update()
    {
        float time = (Time.time - 1f) * 10f;
        if (time < 0f || time > 1f)
        {
            return;
        }

        Vector3 rotation = transform.eulerAngles;
        float x = Mathf.Lerp(170f, 0, time);
        rotation.x = x;
        transform.eulerAngles = rotation;
        Debug.Log($"rotation: {transform.eulerAngles}, x: {x}");
    }
}
The output from the console. You can clearly see that the rotation does not go from 170 to 0, but from 0 to 90 and then back to 0.
Now, I'm pretty sure this has something to do with quaternions and their identity, but I'm not sure how this can be avoided.
PS: The same idea but for Y and Z works just fine.
OP:
I have a very simple script that I wish to rotate something on the X-axis, and I'd expect the other two axes to stay put(they flip between 0 and 180), and only X to change.
You can clearly see that the rotation does not go from 170 to 0, but from 0 to 90 and then back to 0. Now, I'm pretty sure this has something to do with quaternions
Well the real culprit is Euler angles.
If we take a look at your code:
Vector3 rotation = transform.eulerAngles;
float x = Mathf.Lerp(170f, 0, time);
rotation.x = x;
transform.eulerAngles = rotation;
Debug.Log($"rotation: {transform.eulerAngles}, x: {x}");
...we can see you are performing rotations via transform.eulerAngles. The thing about 3D rotations is that you should avoid Euler angles due to their limitations and problems (gimbal lock, anyone?) and use quaternions instead. The latter are the source of truth.
Unity (my emphasis):
When you read the .eulerAngles property, Unity converts the Quaternion's internal representation of the rotation to Euler angles. Because, there is more than one way to represent any given rotation using Euler angles, the values you read back out may be quite different from the values you assigned. This can cause confusion if you are trying to gradually increment the values to produce animation.
...which is exactly what is happening with your code.
Consider this:
Notice anything about the 23.5 and the 156.5?
23.5 + 156.5 = 180
In other words both will lead to the same rotation as per "there is more than one way to represent any given rotation".
An arguably simpler approach is:
public class RotateWithTime : MonoBehaviour
{
    [SerializeField, Tooltip("Rotation rate in degrees/second")]
    private Vector3 rotationSpeed; // e.g. (30,0,0) for 30 deg/sec X-only

    private void Reset()
    {
        rotationSpeed = Vector3.zero;
    }

    // Update is called once per frame
    void Update()
    {
        var amount = rotationSpeed * Time.deltaTime;
        transform.Rotate(amount);
    }
}
And a version without Vector3s:
public class RotateWithTimeNoV3 : MonoBehaviour
{
    [SerializeField, Tooltip("Rotation rate in degrees/second")]
    private float rotationSpeedX; // e.g. 30 for 30 deg/sec X-only

    private void Reset()
    {
        rotationSpeedX = 0f;
    }

    // Update is called once per frame
    void Update()
    {
        var amount = rotationSpeedX * Time.deltaTime;
        transform.Rotate(amount, 0f, 0f);
    }
}
In order to avoid this question becoming a chameleon question due to the lack of info in the initial question, I have asked a new one and will be closing this one. You can find it here

How to determine an android phones viewing direction when starting an app

I'm working on an Augmented Reality app for Android without tracking images/objects. The user stands at a predefined position and virtual objects are placed into the real world. When the user turns around or moves the phone, the objects stay fixed in their respective places. I do this by applying the gyroscope data to the camera.
My problem: I want the objects' positions to always be fixed to the same places regardless of the user's viewing direction when the app starts. Right now, on starting the app, the objects are positioned relative to the camera. After that, they stay fixed in place as the user changes their viewing direction.
I drew an image to better illustrate the exact problem:
I want to know which sensors are relevant to solving this problem. Since Google Maps accurately determines a user's viewing direction, I assume there are built-in sensors to find out which direction the user is looking in, so this information can be applied to the camera's rotation at the start.
This is the code I use to apply the phone's rotation to the camera (I'm using Unity and C#):
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Gyrotransform : MonoBehaviour
{
    // STATE
    private float _initialYAngle = 0f;
    private float _appliedGyroYAngle = 0f;
    private float _calibrationYAngle = 0f;
    private Transform _rawGyroRotation;
    private float _tempSmoothing;

    // SETTINGS
    [SerializeField] private float _smoothing = 0.1f;

    private IEnumerator Start()
    {
        Input.gyro.enabled = true;
        Application.targetFrameRate = 60;
        _initialYAngle = transform.eulerAngles.y;

        _rawGyroRotation = new GameObject("GyroRaw").transform;
        // _rawGyroRotation.parent = Core.Instance.transform;
        _rawGyroRotation.position = transform.position;
        _rawGyroRotation.rotation = transform.rotation;

        // Wait until gyro is active, then calibrate to reset starting rotation.
        yield return new WaitForSeconds(1);
        StartCoroutine(CalibrateYAngle());
    }

    private void Update()
    {
        ApplyGyroRotation();
        ApplyCalibration();
        transform.rotation = Quaternion.Slerp(transform.rotation, _rawGyroRotation.rotation, _smoothing);
    }

    private IEnumerator CalibrateYAngle()
    {
        _tempSmoothing = _smoothing;
        _smoothing = 1;
        _calibrationYAngle = _appliedGyroYAngle - _initialYAngle; // Offsets the y angle in case it wasn't 0 at edit time.
        yield return null;
        _smoothing = _tempSmoothing;
    }

    private void ApplyGyroRotation()
    {
        _rawGyroRotation.rotation = Input.gyro.attitude;
        _rawGyroRotation.Rotate(0f, 0f, 180f, Space.Self); // Swap "handedness" of quaternion from gyro.
        _rawGyroRotation.Rotate(90f, 180f, 0f, Space.World); // Rotate to make sense as a camera pointing out the back of your device.
        _appliedGyroYAngle = _rawGyroRotation.eulerAngles.y; // Save the angle around y axis for use in calibration.
    }

    private void ApplyCalibration()
    {
        _rawGyroRotation.Rotate(0f, -_calibrationYAngle, 0f, Space.World); // Rotates y angle back however much it deviated when calibrationYAngle was saved.
    }

    public void SetEnabled(bool value)
    {
        enabled = value; // was hard-coded to true, which ignored the parameter
        StartCoroutine(CalibrateYAngle());
    }
}
As far as I understand it, the gyroscope returns the rotational difference since it was started.
That's why your objects appear in the direction you are facing at startup.
I guess what you rather want is Compass.magneticHeading, at least for setting the correct rotation once at game start:
// Orient an object to point to magnetic north.
transform.rotation = Quaternion.Euler(0, -Input.compass.magneticHeading, 0);
You could do this once at startup on the parent of all the objects you want to show, in order to orient them correctly towards north.
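A minimal sketch of that idea, assuming all the virtual objects share a common parent (the class name and the one-second sensor warm-up are just illustrative):
using System.Collections;
using UnityEngine;

// Hypothetical sketch: rotate the common parent of all virtual objects once at
// startup so the scene is oriented towards magnetic north, regardless of which
// way the phone was facing when the app launched.
public class WorldAnchor : MonoBehaviour
{
    private IEnumerator Start()
    {
        Input.compass.enabled = true;
        Input.location.Start();              // the compass needs location services on most devices
        yield return new WaitForSeconds(1f); // give the sensors a moment to deliver real values

        float heading = Input.compass.magneticHeading; // degrees clockwise from magnetic north
        transform.rotation = Quaternion.Euler(0f, -heading, 0f);
    }
}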

Computationally quickly project 3d object onto 2d plane and get surface area of that (for aerodynamic drag)?

I'm trying to simulate swimming in Unity (using C#) by having the movements of the object create drag forces which then propel the object through the liquid.
To do this, I'm using the formula
F = -½ * C * d * v² * A
where C is a drag coefficient, d is the density of the liquid, and A is the object's surface area facing the direction of motion. A is calculated by projecting the 3D object onto a 2D plane perpendicular to the velocity vector.
Here's an image explaining A:
https://www.real-world-physics-problems.com/images/drag_force_2.png
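A direct translation of the formula would look something like this (the helper names are just for illustration):
using UnityEngine;

// Hypothetical helper: drag force for frontal area A, fluid density d and drag
// coefficient C, following F = -1/2 * C * d * |v|^2 * A, directed against the velocity.
public static class Drag
{
    public static Vector3 Force(Vector3 velocity, float C, float d, float A)
    {
        float magnitude = 0.5f * C * d * velocity.sqrMagnitude * A;
        return -velocity.normalized * magnitude;
    }
}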
Now I suspect Unity has a built-in way to do this type of projection (since it does that every time there's a camera in the scene).
My question is:
How do I do this? Searches have not helped me with this (unless you're trying to do it with a camera)
Is there a built in function in Unity?
Is this computationally expensive? I am going to be doing this individually for possibly thousands of objects at a time.
I DO NOT need it to be very accurate. I'm just trying to make it a bit realistic, so I want objects with a much bigger A to have more drag than ones with a much lower A. Slight differences are inconsequential. The objects themselves won't be super complex, but some may have very different areas depending on orientation. A cone, for example, could change quite a bit depending on which direction it's moving in. I could approximate A with a simple shape if needed, like an ellipsoid or rectangle.
If it is computationally expensive, I read a journal article that used a cool way to approximate it. The author created a grid of evenly spaced points (which he called voxels) inside the object, which effectively splits the object into equal-sized spheres (whose cross-section is always a circle, so it's easy to calculate). He then calculated the drag force on each of these spheres and added them up to find the total drag (see images).
Images from the thesis report "Real-time Physics-based Animation of a Humanoid Swimmer", Jurgis Pamerneckas, 2014: https://dspace.library.uu.nl/bitstream/handle/1874/298577/JP-PhysBAnimHumanSwim.pdf?sequence=2
This successfully estimated drag for him. But I see one problem: the "voxels" deep inside the object still contribute to drag, when only the ones near the leading edge should.
So I thought of a possibility: I could project just the voxel points onto the 2D plane (perpendicular to the velocity), then find a bounding shape or something, and approximate it that way. I suspect projecting a few points would be faster than projecting a whole 3D object.
This raises a few more questions:
Does this seem like a better method?
How would I create voxels in Unity?
Is it computationally faster?
Any better ideas?
Another thought I had was to do raycasting of some sort, though I can't think of exactly how: perhaps a grid of raycasts parallel to the velocity vector, counting how many hit, to approximate the area?
UPDATE
I managed to implement a basic drag force by manually typing in the value for A; now I need to approximate A in some way. Even with the manual value, it works surprisingly well for very basic "swimmers". In the image below, the swimmer correctly spins to the right since his left arm is bigger (I gave it double the value for A).
UPDATE 2
Based on #Pierre's comments, I tried computing A for the overall shape using the object's vertices (and also by selecting just a few of the vertices), projecting them onto a plane, and calculating the overall area of the resulting polygon. However, this only captured the overall drag force on the object. It didn't capture any rotational drag caused by certain parts of the object moving faster than others. Think of a baseball bat swing: the far end of the bat creates more drag because it's moving faster than the handle.
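One way to estimate A from the vertices along these lines is to project them onto the plane perpendicular to the velocity and take a bounding rectangle in that plane. A rough sketch (a deliberate over-estimate compared with the true polygon area):
using UnityEngine;

// Rough sketch: project mesh vertices onto the plane perpendicular to the velocity
// and use a bounding rectangle in that plane as a cheap over-estimate of A.
public static class ProjectedArea
{
    public static float Estimate(MeshFilter meshFilter, Vector3 velocity)
    {
        if (velocity.sqrMagnitude < 1e-8f) return 0f;

        // Orthonormal basis (u, w) spanning the projection plane.
        Vector3 n = velocity.normalized;
        Vector3 u = Vector3.Cross(n, Mathf.Abs(n.y) < 0.99f ? Vector3.up : Vector3.right).normalized;
        Vector3 w = Vector3.Cross(n, u);

        float minU = float.MaxValue, maxU = float.MinValue;
        float minW = float.MaxValue, maxW = float.MinValue;

        foreach (Vector3 local in meshFilter.sharedMesh.vertices)
        {
            Vector3 world = meshFilter.transform.TransformPoint(local);
            float pu = Vector3.Dot(world, u);
            float pw = Vector3.Dot(world, w);
            if (pu < minU) minU = pu;
            if (pu > maxU) maxU = pu;
            if (pw < minW) minW = pw;
            if (pw > maxW) maxW = pw;
        }

        // Bounding-rectangle area in the projection plane.
        return (maxU - minU) * (maxW - minW);
    }
}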
This made me go back to the "voxel" idea, since I could calculate local drag sampled at several parts of the object.
I'm playing around with this idea, estimating each voxel's surface area with a circle, but I'm still having a few issues making this estimate reasonably accurate. Despite the inaccuracy, it seems to work quite well.
First, I'm using raycasts to determine whether the voxel can "see" in the direction of the velocity, i.e. whether it's on the leading face of the object. If so, I take the voxel's local (circular) surface area and multiply it by the dot product of the circle's normal and the local velocity vector. This scales the area based on how much it's actually facing the direction of motion.
The inaccuracies so far are due to the circles not estimating the local surface area very well, especially for weirdly elongated objects. The further the vertices are from each other, the worse the estimate becomes. Any help in this department would be appreciated.
Also, I need to optimize this computationally. Right now, doing it with every vertex is proving to be fairly expensive. I'll keep updating as I progress, and any input would be very helpful! I'll post some code soon once I get a bit farther.
UPDATE 3
I did a fairly accurate implementation using voxels which I manually placed on the surface of the object, manually estimating the local A that each voxel represents. I then used the dot product to estimate how much of that area was facing the direction of motion. This worked very well. The problem then was that even voxels that weren't on the leading edge of the object were contributing to drag, so I used Physics.Raycast to step a small distance away from the voxel in the direction of the velocity and then raycast back at the voxel. If this raycast hit the collider of the actual object (not the voxel), it meant the voxel was on the leading edge. This worked fantastically and yielded surprisingly accurate, natural-looking drag behaviour. Strangely shaped objects would eventually rotate to minimize drag, just like you'd expect. However, as soon as I increased the voxel resolution and/or added a few more objects to the scene, my frame rate dropped to around 3 fps. The profiler showed that the brunt of the calculations was the raycasting step. I've tried to think of other ways to determine whether the voxels are on the leading edge, so far to no avail.
So TLDR, I simulated drag really well, but not in a computationally fast manner.
I never figured out a way to speed up the calculations, but the simulation works great as long as the voxel count is low.
The simulation calculates drag based on the velocity of each voxel. It checks whether it's on the leading edge of the object, and if so applies its drag force.
The code is probably a bit difficult to follow but should at least get you started if you want to try it out. Let me know if you have any questions or need clarifications.
This code is a slightly cleaned up version from my Update#3 above.
In action:
At the start of the simulation (the object is moving in a straight line towards the bottom right of the screen) you can see the force arrows added for visualization and the circles representing the voxels. The force is correctly proportional to the surface area the voxels roughly represent, and only the leading edges of the shapes contribute drag.
As the simulation continues, the shape correctly rotates into the most aerodynamic position because of the drag, and the rear sections stop contributing drag.
Drag Enabled Shape Class
This is dragged onto the main object (the one with the Rigidbody) to enable drag. You can either have it create voxels spread around a sphere shape, or load in your own custom voxels, which are game objects with the Voxel script attached and are children of this object.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Linq;
[RequireComponent (typeof(Rigidbody))]
public class DragEnabledShape : MonoBehaviour {
const float EPSILON = 0.0001f;
public Voxel voxelPrefab;
public float C = 1f;
public float d = 0.5f;
public int resolutionFactor = 2;
public float maxDistanceFromCenter = 10f;
public bool displayDragVisualization = false;
public float forceVisualizationMultiplier = 1f;
public bool displayVoxels = false;
public bool loadCustomVoxels = false;
List<Voxel> voxels;
Rigidbody rb;
// Use this for initialization
void Awake () {
voxels = new List<Voxel> ();
rb = GetComponent<Rigidbody> ();
}
void OnEnable () {
if (loadCustomVoxels) {
var customVoxels = GetComponentsInChildren<Voxel> ();
voxels.AddRange (customVoxels);
if (displayDragVisualization) {
foreach (Voxel voxel in customVoxels) {
voxel.DisplayDrag (forceVisualizationMultiplier);
}
}
if (displayVoxels) {
foreach (Voxel voxel in customVoxels) {
voxel.Display ();
}
}
}
else {
foreach (Transform child in GetComponentsInChildren<Transform> ()) {
if (child.GetComponent<Collider> ()) {
//print ("creating voxels of " + child.gameObject.name);
CreateSurfaceVoxels (child);
}
}
}
}
void CreateSurfaceVoxels (Transform body) {
List<Vector3> directionList = new List<Vector3> ();
for (float i = -1; i <= 1 + EPSILON; i += 2f / resolutionFactor) {
for (float j = -1; j <= 1 + EPSILON; j += 2f / resolutionFactor) {
for (float k = -1; k <= 1 + EPSILON; k += 2f / resolutionFactor) {
Vector3 v = new Vector3 (i, j, k);
directionList.Add (v);
}
}
}
//float runningTotalVoxelArea = 0;
foreach (Vector3 direction in directionList) {
Ray upRay = new Ray (body.position, direction).Reverse (maxDistanceFromCenter);
RaycastHit[] hits = Physics.RaycastAll (upRay, maxDistanceFromCenter);
if (hits.Length > 0) {
//print ("Aiming for " + body.gameObject.name + "and hit count: " + hits.Length);
foreach (RaycastHit hit in hits) {
if (hit.collider == body.GetComponent<Collider> ()) {
//if (GetComponentsInParent<Transform> ().Contains (hit.transform)) {
//print ("hit " + body.gameObject.name);
GameObject empty = new GameObject ();
empty.name = "Voxels";
empty.transform.parent = body;
empty.transform.localPosition = Vector3.zero;
GameObject newVoxelObject = Instantiate (voxelPrefab.gameObject, empty.transform);
Voxel newVoxel = newVoxelObject.GetComponent<Voxel> ();
voxels.Add (newVoxel);
newVoxel.transform.position = hit.point;
newVoxel.transform.rotation = Quaternion.LookRotation (hit.normal);
newVoxel.DetermineTotalSurfaceArea (hit.distance - maxDistanceFromCenter, resolutionFactor);
newVoxel.attachedToCollider = body.GetComponent<Collider> ();
if (displayDragVisualization) {
newVoxel.DisplayDrag (forceVisualizationMultiplier);
}
if (displayVoxels) {
newVoxel.Display ();
}
//runningTotalVoxelArea += vox.TotalSurfaceArea;
//newVoxel.GetComponent<FixedJoint> ().connectedBody = shape.GetComponent<Rigidbody> ();
}
else {
//print ("missed " + body.gameObject.name + "but hit " + hit.transform.gameObject.name);
}
}
}
}
}
void FixedUpdate () {
foreach (Voxel voxel in voxels) {
rb.AddForceAtPosition (voxel.GetDrag (), voxel.transform.position);
}
}
}
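Note: the Reverse(float) call on Ray above (and again in the Voxel class below) is an extension method that isn't included in the post. Based on how it's used (step a distance out along the ray, then point back at the original origin so you can raycast at a surface from outside), it is presumably something like:
using UnityEngine;

// Assumed implementation of the Ray.Reverse extension used above: move the origin
// 'distance' units along the ray, then flip the direction so the ray points back
// towards the original origin.
public static class RayExtensions
{
    public static Ray Reverse(this Ray ray, float distance)
    {
        return new Ray(ray.origin + ray.direction * distance, -ray.direction);
    }
}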
Voxel class
This script is attached to small GameObjects placed around a shape. They represent the locations at which drag is computed, so for complex shapes they should sit at any extremities and be fairly spread out over the object. The voxel object's rigidbody mass should approximate the portion of the object the voxel represents.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Voxel : MonoBehaviour {
Vector3 velocity;
public Collider attachedToCollider;
Vector3 drag;
public Vector3 Drag {
get {
return drag;
}
}
float dragMagnitude;
public float DragMagnitude {
get {
return dragMagnitude;
}
}
bool leadingEdge;
public bool LeadingEdge {
get {
return leadingEdge;
}
}
bool firstUpdate = true;
public float localSurfaceArea;
Vector3 prevPos;
public VoxelForceVisualizer forceVisualizer;
public VoxelVisualizer voxelVisualizer;
const float AREA_COEFFICIENT = 1.1f;
const float EPSILON = 0.001f;
const float FAR_DISTANCE = 5f;
const float MAX_FORCE = 100f;
public void DetermineTotalSurfaceArea (float distanceFromCenter, float resolution) {
float theta = (Mathf.PI / 4) / resolution;
float localR = distanceFromCenter * Mathf.Tan (theta) * AREA_COEFFICIENT;// * (resolution / 0.01f);
localSurfaceArea = Mathf.PI * localR * localR;
}
bool IsVisibleFromPlane () {
if (attachedToCollider == null) {
throw new MissingReferenceException ("attached to collider not set");
}
bool visibleFromPlane = false;
//checks if this is leading edge of this part of object.
Ray justOutsideSurface = new Ray (this.transform.position, velocity).Reverse (EPSILON);
RaycastHit hit;
if (Physics.Raycast (justOutsideSurface, out hit, EPSILON * 2f)) {
if (hit.collider == attachedToCollider) {
//checks if other parts of this object are in front, blocking airflow.
//Ray wayOutsideSurface = new Ray (this.transform.position, velocity).Reverse (FAR_DISTANCE);
//RaycastHit firstHit;
//if (Physics.Raycast (wayOutsideSurface, out firstHit, FAR_DISTANCE * 2f)) {
//if (firstHit.collider == attachedToCollider) {
visibleFromPlane = true;
//}
//}
}
}
//}
leadingEdge = visibleFromPlane;
return visibleFromPlane;
}
void FixedUpdate () {
if (firstUpdate) {
prevPos = transform.position;
firstUpdate = false;
}
velocity = (transform.position - prevPos) / Time.deltaTime;
prevPos = transform.position;
}
public Vector3 GetDrag () {
if (IsVisibleFromPlane ()) {
float alignment = Vector3.Dot (velocity, this.transform.forward);
float A = alignment * localSurfaceArea;
dragMagnitude = DragForce.Calculate (velocity.sqrMagnitude, A);
//This clamp is necessary for imperfections in velocity calculation, especially with joint limits!
//dragMagnitude = Mathf.Clamp (dragMagnitude, 0f, MAX_FORCE);
drag = -velocity.normalized * dragMagnitude; // direction only; the magnitude already contains v²
}
return drag;
}
public void Display () {
voxelVisualizer.gameObject.SetActive (true);
}
public void TurnOffDisplay () {
voxelVisualizer.gameObject.SetActive (false);
}
public void DisplayDrag (float forceMultiplier) {
forceVisualizer.gameObject.SetActive (true);
forceVisualizer.multiplier = forceMultiplier;
}
public void TurnOffDragDisplay () {
forceVisualizer.gameObject.SetActive (false);
}
}
VoxelForceVisualizer
This is attached to a prefab of a thin arrow that I put as a child of each voxel, so force arrows can be drawn while debugging the drag force.
using UnityEngine;
public class VoxelForceVisualizer : MonoBehaviour {
const float TINY_NUMBER = 0.00000001f;
public Voxel voxel;
public float drag;
public float multiplier;
void Start () {
voxel = this.GetComponentInParent<Voxel> ();
}
// Update is called once per frame
void Update () {
Vector3 rescale;
if (voxel.LeadingEdge && voxel.Drag != Vector3.zero) {
this.transform.rotation = Quaternion.LookRotation (voxel.Drag);
rescale = new Vector3 (1f, 1f, voxel.DragMagnitude * multiplier);
}
else {
rescale = Vector3.zero;
}
this.transform.localScale = rescale;
drag = voxel.DragMagnitude;
}
}
VoxelVisualizer
This is attached to a small sphere object as a child of the voxel empty. It's just there to show where the voxels are and to let the scripts above show/hide the voxels without disabling the drag force calculations.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class VoxelVisualizer : MonoBehaviour {
}
DragForce
This calculates the drag force magnitude; the direction and the minus sign are applied by the caller (Voxel.GetDrag).
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public static class DragForce {
const float EPSILON = 0.000001f;
public static float Calculate (float coefficient, float density, float vsq, float A) {
float f = coefficient * density * vsq * A;
return f;
}
public static float Calculate (float vsq, float A) {
return Calculate (1f, 1f, vsq, A);
}
}
