I want to make a 3D game, but when I press a button in the game, I want it to switch to a 2D view.
Is it possible to trigger this from code by pressing a key, just like clicking the 2D button in Unity?
The main difference between a 2D and a 3D scene lies in the camera projection: in 2D projects the camera is usually orthographic.
To solve this, you have to switch the camera between perspective and orthographic modes. The process begins with saving the camera's projection matrices for both modes.
private Camera cam;
private Matrix4x4 project2D;
private Matrix4x4 project3D;
public void Start()
{
cam = GetComponent<Camera>();
var temp = cam.orthographic;
cam.orthographic = true;
project2D = cam.projectionMatrix;
cam.orthographic = false;
project3D = cam.projectionMatrix;
cam.orthographic = temp;
}
In the code above we saved the camera's projection matrix for both modes. Next you need a way to blend smoothly between the two; the MatrixLerp method below does that.
private Matrix4x4 MatrixLerp(Matrix4x4 from, Matrix4x4 to, float t)
{
t = Mathf.Clamp(t, 0.0f, 1.0f);
var newMatrix = new Matrix4x4();
newMatrix.SetRow(0, Vector4.Lerp(from.GetRow(0), to.GetRow(0), t));
newMatrix.SetRow(1, Vector4.Lerp(from.GetRow(1), to.GetRow(1), t));
newMatrix.SetRow(2, Vector4.Lerp(from.GetRow(2), to.GetRow(2), t));
newMatrix.SetRow(3, Vector4.Lerp(from.GetRow(3), to.GetRow(3), t));
return newMatrix;
}
In the next step, the following coroutine switches the projection by advancing a progress value each frame. In the Update() method below, keys 1 and 2 switch between the two modes; you can call the same coroutine from any other code that needs to trigger the switch.
public float projectTime = 1f;
private IEnumerator SwitchProjection(Matrix4x4 projectTo)
{
var progress = 0f;
var currentProject = cam.projectionMatrix;
while (progress < 1)
{
progress += Time.deltaTime/projectTime;
cam.projectionMatrix = MatrixLerp(currentProject, projectTo, progress);
yield return null;
}
cam.orthographic = projectTo == project2D;
}
void Update()
{
if (Input.GetKeyDown(KeyCode.Alpha1))
{
StartCoroutine(SwitchProjection(project3D));
}
if (Input.GetKeyDown(KeyCode.Alpha2))
{
StartCoroutine(SwitchProjection(project2D));
}
}
There is, in and of itself, no difference between 2D and 3D "view" in Unity. Everything in Unity exists in a 3D space at all times, so there is no real switching. However there are some tricks you could do to make it look like you are switching between those two.
What you could do is:
Change the camera angle so everything is seen from a different angle instead of only from the front.
Replace all your 2D sprites with equivalent 3D objects (replacing a square with a cube, etc.), or simply have everything as 3D objects from the start.
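For the first option (changing the camera angle), here is a minimal sketch; the ViewToggle name, the view3D/view2D anchor transforms and the Tab key are assumptions rather than anything from the answer above.
using UnityEngine;
public class ViewToggle : MonoBehaviour
{
    public Transform view3D; // an empty placed at an angled "3D" viewpoint
    public Transform view2D; // an empty looking straight at the play area
    private bool is2D;
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Tab))
        {
            is2D = !is2D;
            Transform target = is2D ? view2D : view3D;
            // Snap the camera to the chosen viewpoint; the scene content itself never changes.
            transform.SetPositionAndRotation(target.position, target.rotation);
        }
    }
}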
Related
I have a setup similar to this mockup in my 3D Unity game.
Each object has a collider as well as a rigidbody. I am moving the objects with the mouse using simple mouse input events:
private void OnMouseDown()
{
isSelected = true;
}
private void OnMouseDrag()
{
if (!isSelected) return;
Vector3 touchLocation = InputController.Instance.getTapLocation(); // Gets touch/mouse location using Raycast
_transform.position = track.GetPositionOnPath(touchLocation); //Uses vector projection math
}
private void OnMouseUp()
{
if (isSelected)
isSelected = false;
}
The object's movement is constrained to a path using vector projection:
public Vector3 GetPositionOnPath(Vector3 touchLocation)
{
//Make the vector for the track u
Vector3 u = end.position - start.position;
//Make the touch location vector from the track start position v
Vector3 v = touchLocation - start.position;
float uv = Vector3.Dot(u, v);
float uMagSqr = Mathf.Pow(u.magnitude, 2);
Vector3 p = (uv / uMagSqr) * u; //Projection Formula (u.v)/(|u|^2) * u
Vector3 finalVector = start.position + p;
//Clamp the final vector to the end.positions of the track
if (start.position.x > end.position.x)
finalVector.x = Mathf.Clamp(finalVector.x, end.position.x, start.position.x);
else
finalVector.x = Mathf.Clamp(finalVector.x, start.position.x, end.position.x);
if (start.position.z > end.position.z)
finalVector.z = Mathf.Clamp(finalVector.z, end.position.z, start.position.z);
else
finalVector.z = Mathf.Clamp(finalVector.z, start.position.z, end.position.z);
return finalVector;
}
I want to make it so that other objects in the way obstruct the movement of the selected object, as shown in the mockup. The solutions I thought of were to increase the masses of the objects so that they won't fly away when they collide, or to keep every object kinematic except the currently selected one. But both have the same issue: the objects simply slide through each other.
It seems like a simple thing to do, but I can't seem to find any resources to help me out. I would really appreciate some help on how to tackle this problem and implement the mechanic as I intend. Thanks in Advance.
I have yet to find the exact solution I am looking for. But since directly setting the position of a rigidbody is highly discouraged, I switched over to physics-based movement based on information I found in this and this thread. Here is my current implementation:
private void OnMouseDown()
{
isSelected = true;
_rigidbody.isKinematic = false;
}
private void OnMouseDrag()
{
if (!isSelected) return;
Vector3 target = InputController.Instance.getTapLocation();
//_rigidbody.MovePosition(target);
force = _track.GetPositionOnPath(target) - _transform.position;
}
private void OnMouseUp()
{
if (!isSelected) return;
isSelected = false;
_rigidbody.isKinematic = true;
force = Vector3.zero;
}
private void FixedUpdate()
{
_rigidbody.velocity = force;
}
However, this comes at the cost of the movement not being perfectly responsive to the player's input, as shown in this gif.
This is a workaround rather than a full solution, so I am posting it as an answer, but better options are much appreciated.
I'm working on an Augmented Reality app for Android without tracking images/objects. The user stands at a predefined position and virtual objects are placed into the real world. When the user turns around or moves the phone, the objects stay fixed in their respective places. I do this by applying the gyroscope data to the camera.
My problem: I want the objects' positions to always be fixed to the same places regardless of the user's viewing direction when the app starts. Right now, on starting the app, the objects are positioned relative to the camera. After that, they stay fixed in place when the user changes his viewing direction.
I drew an image of what the exact problem is to better elaborate:
I want to know which sensors are relevant to solve this problem. Since Google Maps accurately determines a user's viewing direction, I assume there are built-in sensors to find out which direction the user is facing, so that this information can be applied to the camera's rotation at startup.
This is the code I use to apply the phone's rotation to the camera (I'm using Unity and C#):
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Gyrotransform : MonoBehaviour
{
// STATE
private float _initialYAngle = 0f;
private float _appliedGyroYAngle = 0f;
private float _calibrationYAngle = 0f;
private Transform _rawGyroRotation;
private float _tempSmoothing;
// SETTINGS
[SerializeField] private float _smoothing = 0.1f;
private IEnumerator Start()
{
Input.gyro.enabled = true;
Application.targetFrameRate = 60;
_initialYAngle = transform.eulerAngles.y;
_rawGyroRotation = new GameObject("GyroRaw").transform;
// _rawGyroRotation.parent = Core.Instance.transform;
_rawGyroRotation.position = transform.position;
_rawGyroRotation.rotation = transform.rotation;
// Wait until gyro is active, then calibrate to reset starting rotation.
yield return new WaitForSeconds(1);
StartCoroutine(CalibrateYAngle());
}
private void Update()
{
ApplyGyroRotation();
ApplyCalibration();
transform.rotation = Quaternion.Slerp(transform.rotation, _rawGyroRotation.rotation, _smoothing);
}
private IEnumerator CalibrateYAngle()
{
_tempSmoothing = _smoothing;
_smoothing = 1;
_calibrationYAngle = _appliedGyroYAngle - _initialYAngle; // Offsets the y angle in case it wasn't 0 at edit time.
yield return null;
_smoothing = _tempSmoothing;
}
private void ApplyGyroRotation()
{
_rawGyroRotation.rotation = Input.gyro.attitude;
_rawGyroRotation.Rotate(0f, 0f, 180f, Space.Self); // Swap "handedness" of quaternion from gyro.
_rawGyroRotation.Rotate(90f, 180f, 0f, Space.World); // Rotate to make sense as a camera pointing out the back of your device.
_appliedGyroYAngle = _rawGyroRotation.eulerAngles.y; // Save the angle around y axis for use in calibration.
}
private void ApplyCalibration()
{
_rawGyroRotation.Rotate(0f, -_calibrationYAngle, 0f, Space.World); // Rotates y angle back however much it deviated when calibrationYAngle was saved.
}
public void SetEnabled(bool value)
{
enabled = value;
StartCoroutine(CalibrateYAngle());
}
}
As far as I understand it, the gyroscope returns the rotational difference since it was started.
That's why your objects appear in the direction you are facing at startup.
I guess what you rather want is Compass.magneticHeading, at least for setting the correct rotation once at game start:
// Orient an object to point to magnetic north.
transform.rotation = Quaternion.Euler(0, -Input.compass.magneticHeading, 0);
You could do this once at start on the parent of all the objects you want to show, in order to orient them correctly toward the real-world north.
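A minimal sketch of that idea (the CompassAlign name and the one-second wait are assumptions; on a device the compass usually needs location services running before magneticHeading returns useful values):
using System.Collections;
using UnityEngine;
public class CompassAlign : MonoBehaviour
{
    // Attach this to the common parent of all virtual objects.
    IEnumerator Start()
    {
        Input.compass.enabled = true;
        Input.location.Start(); // magneticHeading needs location services on most devices
        yield return new WaitForSeconds(1f); // give the sensors a moment to deliver data
        transform.rotation = Quaternion.Euler(0f, -Input.compass.magneticHeading, 0f);
    }
}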
I'm working on a project in which I want to create a power-up effect whenever the "Q" key is pressed. I have the animation and the character working, and I also have the objects that I want to spawn around my player (see figure below).
My question is how to apply different gravity to each rock (spawned object).
Here is the script that I'm currently using.
/* Public Variables Declaration */
public Transform spawn_LocationForSmall;
public Transform spawn_LocationForMedium;
public Transform spawn_LocationForLarge;
public GameObject smallRock_Prefab;
public GameObject mediumRock_Prefab;
public GameObject largeRock_Prefab;
/* Private Variables Declaration */
private GameObject[] smallRocks_List;
private float posX, posY, posZ;
private bool smallCount = false;
private bool mediumCount = false;
private bool largeCount = false;
private bool small_CheckPos = false;
private bool medium_CheckPos = false;
private bool large_CheckPos = false;
void Start() {
//smallRocks_List = GameObject.FindGameObjectsWithTag("smallRock");
Create_Small_Rocks();
Create_Medium_Rocks();
Create_Large_Rocks();
}
private void Create_Small_Rocks(){
for(int i=0; i<=20; i++){
small_CheckPos = false;
posX = this.transform.position.x + Random.Range(-3.0f, 3.0f);
posY = this.transform.position.y + Random.Range(-3.0f, 3.0f);
posZ = this.transform.position.z + Random.Range(-3.0f, 3.0f);
if(posX > 3f && posY > 3f){
small_CheckPos = true;
}
if (small_CheckPos == true) {
Vector3 newPos = new Vector3(posX, posY, posZ);
GameObject createdObject = GameObject.Instantiate(smallRock_Prefab,
newPos, spawn_LocationForSmall.rotation) as GameObject;
createdObject.transform.parent = spawn_LocationForSmall.transform;
}
}
smallCount = true;
}
/* the other two functions are similar to this */
I don't really know if you can change the gravity for each individual object, but you can change these things:
Mass:
In the Rigidbody component, there is a "Mass" field at the top. As the Unity documentation says: "Higher mass objects push lower mass objects more when colliding. Think of a big truck, hitting a small car." However, it doesn't change how fast an object falls.
Physics Material:
In the Collider components, you should see something called "Material". You can create new physics materials and tweak them to make the friction between the rock and the surface higher or lower, and to change the bounciness of the rocks.
Constant Force:
If you want some objects to fall faster, you might want to use this component. I've personally never used it before, but it looks like a good fit for your problem. You can add a constant force to an object with this component, so adding some downward force to your rocks should help them fall faster.
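A minimal sketch of the Constant Force idea, reusing the asker's spawning code (it assumes the rock prefab already has a Rigidbody, and the force range is made up):
GameObject createdObject = Instantiate(smallRock_Prefab, newPos, spawn_LocationForSmall.rotation);
// Give each rock its own downward force so they all fall at different rates.
ConstantForce cf = createdObject.AddComponent<ConstantForce>();
cf.force = Vector3.down * Random.Range(5f, 20f); // per-rock "extra gravity"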
Please let me know if any of these helped.
Search for Particle Systems :
1) https://docs.unity3d.com/ScriptReference/ParticleSystem.html
2) https://www.youtube.com/watch?v=FEA1wTMJAR0&t=536s
3) https://www.youtube.com/watch?v=xenW67bXTgM
It allows you to use cool effects or even prefabs as the spawned objects (in this case rocks/asteroids). It can also control the spawning speed, amount, velocity, (random) size, and physics (gravity).
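For the gravity part specifically, a small sketch (rocksParticles is an assumed reference to the rock Particle System, and the 0.5 to 2 range is made up):
ParticleSystem rocksParticles = GetComponent<ParticleSystem>();
var main = rocksParticles.main;
// Scale how strongly physics gravity affects the particles, randomized per particle.
main.gravityModifier = new ParticleSystem.MinMaxCurve(0.5f, 2f);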
First, please bear with my English.
I manually change the camera's projection to orthographic in code.
Please refer to the code below.
using UnityEngine;
using System.Collections;
public class CameraOrthoController : MonoBehaviour
{
private Matrix4x4 ortho;
private Matrix4x4 perspective;
public float near = 0.001f;
public float far = 1000f;
private float aspect;
public static CameraOrthoController Instance
{
get
{
return instance;
}
set { }
}
//-----------------------------------------------------
private static CameraOrthoController instance = null;
//---------------------------------------------------
// Use this for initialization
void Awake()
{
if (instance)
{
DestroyImmediate(gameObject);
return;
}
// Make this instance the only valid object (singleton)
instance = this;
}
private void Start()
{
perspective = Camera.main.projectionMatrix;
}
public void StartMatrixBlender(float OrthoSize)
{
aspect = (Screen.width + 0.0f) / (Screen.height + 0.0f);
if (OrthoSize != 0f)
{
float vertical = OrthoSize;
float horizontal = (vertical * 16f) / 9f;
ortho = Matrix4x4.Ortho(-horizontal, horizontal, -vertical, vertical, near, far);
BlendToMatrix(ortho, 1f);
}
else
{
BlendToMatrix(perspective, 1f);
}
}
//---------------------------------------
private Matrix4x4 MatrixLerp(Matrix4x4 from, Matrix4x4 to, float time)
{
Matrix4x4 ret = new Matrix4x4();
int i;
for (i = 0; i < 16; i++)
ret[i] = Mathf.Lerp(from[i], to[i], time);
return ret;
}
IEnumerator LerpFromTo(Matrix4x4 src, Matrix4x4 dest, float duration)
{
float startTime = Time.time;
while (Time.time - startTime < duration)
{
Camera.main.projectionMatrix = MatrixLerp(src, dest, (Time.time - startTime) / duration);
yield return new WaitForSeconds(0f);
}
Camera.main.projectionMatrix = dest;
}
//-------------------------------------------------
private Coroutine BlendToMatrix(Matrix4x4 targetMatrix, float duration)
{
StopAllCoroutines();
return StartCoroutine(LerpFromTo(Camera.main.projectionMatrix, targetMatrix, duration));
}
//-------------------------------------------------
public void OnEvent(EVENT_TYPE Event_Type, Component Sender, object Param = null, object Param2 = null)
{
switch (Event_Type)
{
}
}
}
I use the code like this:
CameraOrthoController.Instance.StartMatrixBlender(OrthographicSize);
This has worked well so far.
However, a problem occurred when I added a particle system for an effect.
The screen where the problem is occurring
In the normal state, the effect appears in front of the game object, as shown in the Scene view at the bottom of the picture above.
But if I use the code above to manipulate the camera, the effect is always obscured by every game object, as in the Game view at the top of the picture, even though the effect is located in front of the game object.
At first, I thought it could be solved with layer sorting, but I don't think it's a layer problem because the effect is visible under normal camera conditions.
I want to know where the problem is in the code above, because I have to keep using it.
Please let me know if you know how to solve it.
Thank you.
When you modify Camera.projectionMatrix, the camera no longer updates its rendering based on the field of view. The particle system will remain behind the GameObject until you call Camera.ResetProjectionMatrix(), which ends the effect of setting the Camera.projectionMatrix property.
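A minimal sketch of that fix, assuming you only want to drop the override once the blend back to the perspective matrix has finished (for example at the end of LerpFromTo):
Camera.main.projectionMatrix = dest; // final frame of the blend
Camera.main.ResetProjectionMatrix(); // stop overriding; the camera follows fieldOfView again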
If this doesn't work, use multiple cameras to make the particle system always appear on top of the 3D object. Basically, you render the 3D Object and other objects with the main camera then render the Particle System with another camera.
Layer:
1. Create a new layer and name it "Particle".
2. Change the Particle System's layer to Particle.
Main Camera:
1. Make sure that the main camera's Clear Flags is set to Skybox.
2. Change the Culling Mask to "Everything", then click on the Culling Mask dropdown and de-select/uncheck Particle.
3. Make sure that its Depth is set to 0.
The camera should not render the Particle System at this point.
New Camera:
1. Create a new Camera. Make sure it's at the same position/rotation as the main camera, and remove the AudioListener attached to it.
2. Change Clear Flags to Depth only.
3. Change the Culling Mask of the camera to Particle and make sure that nothing else is selected in the Culling Mask.
4. Change Depth to 1.
This will make the Particle System always display on top of every object rendered by the first/main camera.
If you want the Particle System to appear on top of a Sprite/2d Object instead of Mesh/3D Object, change the sortingOrder of the particle's Renderer to be bigger than the SpriteRenderer's sortingOrder. The default is 0 so changing the Particle's sortingOrder to 1 or 2 should be fine.
particle.GetComponent<Renderer>().sortingOrder = 2;
I'm trying to simulate swimming in Unity (using c#) by actually having the movements of the object create drag forces which then propel the object through the liquid.
To do this, I'm using the formula
F = -½ * C * d * v² * A
where C is a drag coefficient, d is the density of the liquid, v is the speed, and A is the object's surface area facing the direction of motion. A is calculated by projecting the 3D object onto a 2D plane perpendicular to the velocity vector.
Here's an image explaining A:
https://www.real-world-physics-problems.com/images/drag_force_2.png
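In Unity terms that formula would look roughly like this (rb, C, d and area are placeholders for your own rigidbody and constants, not something from the question):
Vector3 v = rb.velocity;
float dragMagnitude = 0.5f * C * d * v.sqrMagnitude * area; // ½ * C * d * v² * A
rb.AddForce(-v.normalized * dragMagnitude); // minus sign: drag opposes the motion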
Now I suspect Unity has a built-in way to do this type of projection (since it does it every time there's a camera in the scene).
My question is:
How do I do this? My searches haven't helped (unless you're trying to do it with a camera).
Is there a built-in function in Unity?
Is this computationally expensive? I am going to be doing this individually for possibly thousands of objects at a time.
I DO NOT need it to be very accurate. I'm just trying to make it a bit realistic, so I want objects with a much bigger A to have more drag than ones with a much smaller A. Slight differences are inconsequential. The objects themselves won't be super complex, but some may have very different areas depending on orientation. A cone, for example, could change quite a bit depending on which direction it's moving. I could approximate A with a simple shape if needed, like an ellipsoid or a rectangle.
If it is computationally expensive, I read a journal article that used a cool way to approximate it. The author created a grid of evenly spaced points (which he called voxels) within the object, which effectively split the object into equal-sized spheres (whose cross-section is always a circle, which is easy to calculate). He then calculated the drag force on each of these spheres and added them up to find the total drag (see images).
Images from the thesis report: Real-time Physics-based Animation of a Humanoid Swimmer, Jurgis Pamerneckas, 2014
link: https://dspace.library.uu.nl/bitstream/handle/1874/298577/JP-PhysBAnimHumanSwim.pdf?sequence=2
This successfully estimated drag for him. But I see one problem: the "voxels" deep inside the object still contribute to drag, when only the ones near the leading edge should.
So I thought of a possibility where I could project just the voxel points onto the 2D plane (perpendicular to the velocity) and then find a bounding shape or something, and approximate it that way. I suspect projecting a few points would be faster than projecting a whole 3D object.
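A minimal sketch of that projection step (voxelPoints is an assumed List<Vector3> of voxel positions and velocity is the object's velocity); Vector3.ProjectOnPlane flattens each point onto the plane perpendicular to the motion, and the result could then be fed to a 2D bounding shape or convex hull to estimate A:
Vector3 n = velocity.normalized; // plane normal = direction of motion
List<Vector3> projected = new List<Vector3>();
foreach (Vector3 p in voxelPoints)
{
    projected.Add(Vector3.ProjectOnPlane(p, n)); // removes the component along the velocity
}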
this raises a few more questions:
Does this seem like a better method?
How would I create voxels in Unity?
Is it computationally faster?
Any better ideas?
Another thought I had was to do raycasting of some sort, though I can't think of how to do that. Perhaps a grid of raycasts parallel to the velocity vector, counting how many hit to approximate the area?
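A rough sketch of that raycast-grid idea (gridSize, spacing, castDistance and myCollider are all assumptions, and the cross products below break down if the velocity happens to be parallel to Vector3.up):
int hits = 0;
Vector3 dir = rb.velocity.normalized;
Vector3 right = Vector3.Cross(dir, Vector3.up).normalized;
Vector3 up = Vector3.Cross(right, dir);
for (int i = -gridSize; i <= gridSize; i++)
{
    for (int j = -gridSize; j <= gridSize; j++)
    {
        // Start each ray behind the object and fire it along the velocity.
        Vector3 origin = transform.position - dir * castDistance + right * (i * spacing) + up * (j * spacing);
        if (Physics.Raycast(origin, dir, out RaycastHit hit, castDistance * 2f) && hit.collider == myCollider)
            hits++;
    }
}
float estimatedA = hits * spacing * spacing; // each hit stands in for one grid cell of frontal area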
UPDATE
I managed to implement the basic drag force by manually typing in the value for A; now I need to approximate A in some way. Even with manual typing, it works surprisingly well for very basic "swimmers". In the image below, the swimmer correctly spins to the right since his left arm is bigger (I gave it double the value for A).
UPDATE 2
Based on @Pierre's comments, I tried computing A for the overall shape using the object's vertices (and also by selecting a few points on the vertices), projecting them onto a plane, and calculating the area of the resulting polygon. However, this only gave the overall drag force on the object. It didn't capture any rotational drag caused by parts of the object moving faster than others. For example, think of a baseball bat swing: the far end of the bat creates more drag because it's moving faster than the handle.
This made me go back to the "voxel" idea, since I could calculate local drag sampled at several parts of the object.
I'm playing around with this idea, estimating each voxel's surface area with a circle, but I'm still having a few issues making the estimate reasonably accurate. Despite the inaccuracy, it seems to work quite well.
First, I'm using raycasts to determine whether the voxel can "see" in the direction of the velocity, to decide whether it's on the leading face of the object. If so, I take the voxel's local (circular) surface area and multiply it by the dot product of the circle's normal and the local velocity vector. This scales the area based on how much it actually faces the direction of motion.
The inaccuracies so far are due to the circles not estimating the local surface area very well, especially for elongated objects. The further apart the vertices are, the worse the estimate becomes. Any help in this department would be appreciated.
Also, I need to optimize this computationally. Right now, doing it for every vertex is proving fairly expensive. I'll keep updating as I progress, and any input would be very helpful! I'll post some code soon once I get a bit farther.
UPDATE 3
I did a fairly accurate implementation using voxels which I manually placed on the surface of the object, manually estimating the local A facing each voxel. I then used the dot product to estimate how much of that area was facing the direction of motion. This worked very well. But the problem then was that even voxels that weren't on the leading edge of the object were contributing to drag. So I used Physics.Raycast to pop a small distance away from the voxel in the direction of the velocity and then raycast back at the voxel. If this raycast hit the collider of the actual object (not the voxel), it meant the voxel was on the leading edge. This worked fantastically and yielded surprisingly accurate, natural-looking drag behaviour: strangely shaped objects would eventually rotate to minimize drag, just like you'd expect. However, as soon as I increased the voxel resolution and/or added a few more objects to the scene, my frame rate dropped to nearly 3 fps. The profiler showed that the brunt of the cost was the raycasting step. I've tried to think of other ways to determine whether the voxels are on the leading edge, so far to no avail.
So TLDR, I simulated drag really well, but not in a computationally fast manner.
I never figured out a way to speed up the calculations, but the simulation works great as long as the voxel count is low.
The simulation calculates drag based on the velocity of each voxel. It checks whether it's on the leading edge of the object, and if so applies its drag force.
The code is probably a bit difficult to follow but should at least get you started if you want to try it out. Let me know if you have any questions or need clarifications.
This code is a slightly cleaned up version from my Update#3 above.
In action:
At the start of the simulation (the object is moving in a straight line towards the bottom right of the screen),
you can see the force arrows added for visualization and the circles representing the voxels. The force is correctly proportional to the surface area the voxels roughly represent, and only the leading edges of the shapes contribute drag.
As the simulation continues, the shape correctly rotates into the most aerodynamic position because of the drag, and the rear sections stop contributing drag.
Drag Enabled Shape Class
This is dragged onto the main object (rigidbody) to enable drag. You can either have it create voxels spread around a sphere shape, or load your own custom voxels, which are game objects with the Voxel script attached and are children of this object.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using System.Linq;
[RequireComponent (typeof(Rigidbody))]
public class DragEnabledShape : MonoBehaviour {
const float EPSILON = 0.0001f;
public Voxel voxelPrefab;
public float C = 1f;
public float d = 0.5f;
public int resolutionFactor = 2;
public float maxDistanceFromCenter = 10f;
public bool displayDragVisualization = false;
public float forceVisualizationMultiplier = 1f;
public bool displayVoxels = false;
public bool loadCustomVoxels = false;
List<Voxel> voxels;
Rigidbody rb;
// Use this for initialization
void Awake () {
voxels = new List<Voxel> ();
rb = GetComponent<Rigidbody> ();
}
void OnEnable () {
if (loadCustomVoxels) {
var customVoxels = GetComponentsInChildren<Voxel> ();
voxels.AddRange (customVoxels);
if (displayDragVisualization) {
foreach (Voxel voxel in customVoxels) {
voxel.DisplayDrag (forceVisualizationMultiplier);
}
}
if (displayVoxels) {
foreach (Voxel voxel in customVoxels) {
voxel.Display ();
}
}
}
else {
foreach (Transform child in GetComponentsInChildren<Transform> ()) {
if (child.GetComponent<Collider> ()) {
//print ("creating voxels of " + child.gameObject.name);
CreateSurfaceVoxels (child);
}
}
}
}
void CreateSurfaceVoxels (Transform body) {
List<Vector3> directionList = new List<Vector3> ();
for (float i = -1; i <= 1 + EPSILON; i += 2f / resolutionFactor) {
for (float j = -1; j <= 1 + EPSILON; j += 2f / resolutionFactor) {
for (float k = -1; k <= 1 + EPSILON; k += 2f / resolutionFactor) {
Vector3 v = new Vector3 (i, j, k);
directionList.Add (v);
}
}
}
//float runningTotalVoxelArea = 0;
foreach (Vector3 direction in directionList) {
Ray upRay = new Ray (body.position, direction).Reverse (maxDistanceFromCenter);
RaycastHit[] hits = Physics.RaycastAll (upRay, maxDistanceFromCenter);
if (hits.Length > 0) {
//print ("Aiming for " + body.gameObject.name + "and hit count: " + hits.Length);
foreach (RaycastHit hit in hits) {
if (hit.collider == body.GetComponent<Collider> ()) {
//if (GetComponentsInParent<Transform> ().Contains (hit.transform)) {
//print ("hit " + body.gameObject.name);
GameObject empty = new GameObject ();
empty.name = "Voxels";
empty.transform.parent = body;
empty.transform.localPosition = Vector3.zero;
GameObject newVoxelObject = Instantiate (voxelPrefab.gameObject, empty.transform);
Voxel newVoxel = newVoxelObject.GetComponent<Voxel> ();
voxels.Add (newVoxel);
newVoxel.transform.position = hit.point;
newVoxel.transform.rotation = Quaternion.LookRotation (hit.normal);
newVoxel.DetermineTotalSurfaceArea (hit.distance - maxDistanceFromCenter, resolutionFactor);
newVoxel.attachedToCollider = body.GetComponent<Collider> ();
if (displayDragVisualization) {
newVoxel.DisplayDrag (forceVisualizationMultiplier);
}
if (displayVoxels) {
newVoxel.Display ();
}
//runningTotalVoxelArea += vox.TotalSurfaceArea;
//newVoxel.GetComponent<FixedJoint> ().connectedBody = shape.GetComponent<Rigidbody> ();
}
else {
//print ("missed " + body.gameObject.name + "but hit " + hit.transform.gameObject.name);
}
}
}
}
}
void FixedUpdate () {
foreach (Voxel voxel in voxels) {
rb.AddForceAtPosition (voxel.GetDrag (), voxel.transform.position);
}
}
}
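One note on the code above: the Reverse() call on Ray is not a built-in Unity API; it is a small extension method that did not make it into the post. A sketch that matches how it is used here (step a given distance along the ray, then cast back the way you came) would be:
using UnityEngine;
public static class RayExtensions
{
    public static Ray Reverse (this Ray ray, float distance)
    {
        // Move the origin 'distance' units along the ray, then flip the direction.
        return new Ray (ray.origin + ray.direction * distance, -ray.direction);
    }
}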
Voxel class
This script is attached to small game objects placed around a shape. They represent the locations at which drag is computed, so for complex shapes these should sit at any extremities and be fairly spread out over the object. The voxel object's rigidbody mass should approximate the portion of the object the voxel represents.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class Voxel : MonoBehaviour {
Vector3 velocity;
public Collider attachedToCollider;
Vector3 drag;
public Vector3 Drag {
get {
return drag;
}
}
float dragMagnitude;
public float DragMagnitude {
get {
return dragMagnitude;
}
}
bool leadingEdge;
public bool LeadingEdge {
get {
return leadingEdge;
}
}
bool firstUpdate = true;
public float localSurfaceArea;
Vector3 prevPos;
public VoxelForceVisualizer forceVisualizer;
public VoxelVisualizer voxelVisualizer;
const float AREA_COEFFICIENT = 1.1f;
const float EPSILON = 0.001f;
const float FAR_DISTANCE = 5f;
const float MAX_FORCE = 100f;
public void DetermineTotalSurfaceArea (float distanceFromCenter, float resolution) {
float theta = (Mathf.PI / 4) / resolution;
float localR = distanceFromCenter * Mathf.Tan (theta) * AREA_COEFFICIENT;// * (resolution / 0.01f);
localSurfaceArea = Mathf.PI * localR * localR;
}
bool IsVisibleFromPlane () {
if (attachedToCollider == null) {
throw new MissingReferenceException ("attached to collider not set");
}
bool visibleFromPlane = false;
//checks if this is leading edge of this part of object.
Ray justOutsideSurface = new Ray (this.transform.position, velocity).Reverse (EPSILON);
RaycastHit hit;
if (Physics.Raycast (justOutsideSurface, out hit, EPSILON * 2f)) {
if (hit.collider == attachedToCollider) {
//checks if other parts of this object are in front, blocking airflow.
//Ray wayOutsideSurface = new Ray (this.transform.position, velocity).Reverse (FAR_DISTANCE);
//RaycastHit firstHit;
//if (Physics.Raycast (wayOutsideSurface, out firstHit, FAR_DISTANCE * 2f)) {
//if (firstHit.collider == attachedToCollider) {
visibleFromPlane = true;
//}
//}
}
}
//}
leadingEdge = visibleFromPlane;
return visibleFromPlane;
}
void FixedUpdate () {
if (firstUpdate) {
prevPos = transform.position;
firstUpdate = false;
}
velocity = (transform.position - prevPos) / Time.deltaTime;
prevPos = transform.position;
}
public Vector3 GetDrag () {
if (IsVisibleFromPlane ()) {
float alignment = Vector3.Dot (velocity, this.transform.forward);
float A = alignment * localSurfaceArea;
dragMagnitude = DragForce.Calculate (velocity.sqrMagnitude, A);
//This clamp is necessary for imperfections in velocity calculation, especially with joint limits!
//dragMagnitude = Mathf.Clamp (dragMagnitude, 0f, MAX_FORCE);
drag = -velocity * dragMagnitude;
}
return drag;
}
public void Display () {
voxelVisualizer.gameObject.SetActive (true);
}
public void TurnOffDisplay () {
voxelVisualizer.gameObject.SetActive (false);
}
public void DisplayDrag (float forceMultiplier) {
forceVisualizer.gameObject.SetActive (true);
forceVisualizer.multiplier = forceMultiplier;
}
public void TurnOffDragDisplay () {
forceVisualizer.gameObject.SetActive (false);
}
}
VoxelForceVisualizer
This is attached to a prefab of a thin arrow that I put as a child of the voxels, to allow force arrows to be drawn while debugging the drag force.
using UnityEngine;
public class VoxelForceVisualizer : MonoBehaviour {
const float TINY_NUMBER = 0.00000001f;
public Voxel voxel;
public float drag;
public float multiplier;
void Start () {
voxel = this.GetComponentInParent<Voxel> ();
}
// Update is called once per frame
void Update () {
Vector3 rescale;
if (voxel.LeadingEdge && voxel.Drag != Vector3.zero) {
this.transform.rotation = Quaternion.LookRotation (voxel.Drag);
rescale = new Vector3 (1f, 1f, voxel.DragMagnitude * multiplier);
}
else {
rescale = Vector3.zero;
}
this.transform.localScale = rescale;
drag = voxel.DragMagnitude;
}
}
VoxelVisualizer
This is attached to a small sphere object that is a child of the voxel empty. It's just there to show where the voxels are, and lets the scripts above show/hide the voxels without disabling the drag force calculations.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class VoxelVisualizer : MonoBehaviour {
}
DragForce
This calculates the drag force
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public static class DragForce {
const float EPSILON = 0.000001f;
public static float Calculate (float coefficient, float density, float vsq, float A) {
float f = coefficient * density * vsq * A;
return f;
}
public static float Calculate (float vsq, float A) {
return Calculate (1f, 1f, vsq, A);
}
}