I am making a 2D game in Unity, and in this game, the camera will not need to move. As such, I would like to constrain the player's movement within the camera border, preferably with collision instead of just based on the player's transform. Honestly, I have no idea where to start doing something like this, but I assume it would involve some scripting. I am pretty comfortable with scripting at this point, but if your answer includes scripting, I would appreciate it if it would include thorough explanations of everything that is going on. By the by, I am using C#.
If the camera is in orthographic mode, you can use an EdgeCollider2D for this: find the world positions of the screen corners with ScreenToWorldPoint and use them as the points of the EdgeCollider2D.
The Unity Community UnityLibrary GitHub repository has an example (copied below):
// adds EdgeCollider2D colliders to screen edges
// only works with orthographic camera
using UnityEngine;
using System.Collections;

namespace UnityLibrary
{
    public class ScreenEdgeColliders : MonoBehaviour
    {
        void Awake()
        {
            AddCollider();
        }

        void AddCollider()
        {
            if (Camera.main == null) { Debug.LogError("Camera.main not found, failed to create edge colliders"); return; }

            var cam = Camera.main;
            if (!cam.orthographic) { Debug.LogError("Camera.main is not Orthographic, failed to create edge colliders"); return; }

            var bottomLeft = (Vector2)cam.ScreenToWorldPoint(new Vector3(0, 0, cam.nearClipPlane));
            var topLeft = (Vector2)cam.ScreenToWorldPoint(new Vector3(0, cam.pixelHeight, cam.nearClipPlane));
            var topRight = (Vector2)cam.ScreenToWorldPoint(new Vector3(cam.pixelWidth, cam.pixelHeight, cam.nearClipPlane));
            var bottomRight = (Vector2)cam.ScreenToWorldPoint(new Vector3(cam.pixelWidth, 0, cam.nearClipPlane));

            // add or use existing EdgeCollider2D
            var edge = GetComponent<EdgeCollider2D>() == null ? gameObject.AddComponent<EdgeCollider2D>() : GetComponent<EdgeCollider2D>();

            var edgePoints = new[] { bottomLeft, topLeft, topRight, bottomRight, bottomLeft };
            edge.points = edgePoints;
        }
    }
}
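One note on the collision-based constraint the question asks about: the edge collider can only block the player if the player object has a dynamic Rigidbody2D plus its own Collider2D and is moved through the physics system rather than by setting its transform directly. A minimal sketch of such a player mover (the speed value and the input axis names are illustrative, not part of the original answer):

using UnityEngine;

// The player needs a dynamic Rigidbody2D plus any Collider2D so the
// screen-edge EdgeCollider2D can physically block it.
[RequireComponent(typeof(Rigidbody2D), typeof(Collider2D))]
public class PlayerMovement : MonoBehaviour
{
    public float speed = 5f; // illustrative value

    Rigidbody2D rb;

    void Awake()
    {
        rb = GetComponent<Rigidbody2D>();
        rb.gravityScale = 0f; // top-down style movement, no gravity
    }

    void FixedUpdate()
    {
        // Drive the body through the physics system so collisions with
        // the edge colliders are respected.
        Vector2 input = new Vector2(Input.GetAxisRaw("Horizontal"), Input.GetAxisRaw("Vertical"));
        rb.velocity = input.normalized * speed;
    }
}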
Related
I am trying to change this script that I found online (https://pressstart.vip/tutorials/2018/09/25/58/spawning-obstacles.html) to work with an orthographic camera, because that is what my game uses. At the moment it only works with perspective cameras, and I don't really know how this works because I have not really touched the camera matrix. Here is the code for the script:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class deployAsteroids : MonoBehaviour {

    public GameObject asteroidPrefab;
    public float respawnTime = 1.0f;
    private Vector2 screenBounds;

    // Use this for initialization
    void Start () {
        screenBounds = Camera.main.ScreenToWorldPoint(new Vector3(Screen.width, Screen.height, Camera.main.transform.position.z));
        StartCoroutine(asteroidWave());
    }

    private void spawnEnemy(){
        GameObject a = Instantiate(asteroidPrefab) as GameObject;
        a.transform.position = new Vector2(screenBounds.x * -2, Random.Range(-screenBounds.y, screenBounds.y));
    }

    IEnumerator asteroidWave(){
        while(true){
            yield return new WaitForSeconds(respawnTime);
            spawnEnemy();
        }
    }
}
My goal is to change the script to make it work correctly with an orthographic camera. (The indentation is messed up, but that is not the problem.)
For some reason the code is using Camera.main.transform.position.z for the camera depth component. This value will be negative in a typical camera setup in a 2D game.
So it's finding the left side of the screen by following the right edge of the frustum behind the camera. Very odd. You can't follow the right side of the frustum to find where the left side of the screen is when the camera is orthographic, so it is no surprise that it doesn't work.
Instead, just use the left side of the frustum and make the depth positive by negating that component:
screenBounds = Camera.main.ViewportToWorldPoint(
new Vector3(0f, 0f, -Camera.main.transform.position.z));
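In context, that is the only line that changes in Start(); a sketch of the adjusted method with the rest of the tutorial script left as-is:

void Start () {
    // Bottom-left corner of the view in world space. Negating the camera's z
    // gives a positive depth in front of the camera, which works for an
    // orthographic camera as well as a perspective one.
    screenBounds = Camera.main.ViewportToWorldPoint(
        new Vector3(0f, 0f, -Camera.main.transform.position.z));

    StartCoroutine(asteroidWave());
}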
I want a UI canvas to follow the camera so that it is always in front of the head and also interactable, like a VR menu. I'm using the following code to do so.
public class FollowMe : MonoBehaviour
{
    public GameObject menuCanvas;
    public Camera FirstPersonCamera;

    [Range(0, 1)]
    public float smoothFactor = 0.5f;

    // how far to stay away from the center
    public float offsetRadius = 0.3f;

    public float distanceToHead = 4;

    public void Update()
    {
        // make the UI always face towards the camera
        menuCanvas.transform.rotation = FirstPersonCamera.transform.rotation;

        var cameraCenter = FirstPersonCamera.transform.position + FirstPersonCamera.transform.forward * distanceToHead;
        var currentPos = menuCanvas.transform.position;

        // in which direction from the center?
        var direction = currentPos - cameraCenter;

        // target is in the same direction but offsetRadius
        // from the center
        var targetPosition = cameraCenter + direction.normalized * offsetRadius;

        // finally interpolate towards this position
        menuCanvas.transform.position = Vector3.Lerp(currentPos, targetPosition, smoothFactor);
    }
}
Unfortunately, the canvas is flickering in front of the camera and it is not properly positioned. How do I make the menu follow the camera?
If there is no reason against it, you can use a Screen Space - Camera canvas, as described in the docs. Then you can reference your FPS camera as the Render Camera for the canvas.
The easy way to do this is the Screen Space - Camera mode, which you set in the Render Mode property of the Canvas component.
The second way, if you want more control over how your canvas behaves, is to set the Canvas Render Mode to "World Space" and then position the canvas from a script like any other GameObject.
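If you go the World Space route, here is a minimal sketch of a follow script that avoids the feedback loop in the question's code by computing the target purely from the camera's forward direction instead of from the canvas's previous position (field names and values are illustrative):

using UnityEngine;

// Attach to the world-space canvas itself.
public class FollowHead : MonoBehaviour
{
    public Camera firstPersonCamera;
    public float distanceToHead = 2f; // how far in front of the camera
    [Range(0, 1)] public float smoothFactor = 0.1f;

    void LateUpdate() // run after the camera has moved this frame
    {
        Transform cam = firstPersonCamera.transform;

        // The target is always straight ahead of the camera, so there is no
        // dependency on the canvas's own last position (the source of the flicker).
        Vector3 targetPosition = cam.position + cam.forward * distanceToHead;

        transform.position = Vector3.Lerp(transform.position, targetPosition, smoothFactor);
        transform.rotation = Quaternion.Slerp(transform.rotation, cam.rotation, smoothFactor);
    }
}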
For my bachelor thesis I'm augmenting a physical paper map. Therefore, as written in the title, I'm using Unity in combination with Vuforia for image target detection and further functions.
So far so good.
Now the problem:
I'm using cube elements that are augmented beside the map as interaction elements to filter the projected content. Those cubes have box colliders on them. I attached the following ButtonController script to my AR camera, which should handle the raycast hits on those cubes and trigger further functions.
using UnityEngine;
using UnityEngine.Events;
using UnityEngine.EventSystems;

public class ButtonController : MonoBehaviour
{
    public AudioClip[] audioClips;
    public AudioSource myAudioSource;

    private string btnName;
    private GameObject[] stations;

    private void Start()
    {
        myAudioSource = GetComponent<AudioSource>();
    }

    private void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            //Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            Vector3 tapPositionFar = new Vector3(Input.mousePosition.x, Input.mousePosition.y, Camera.main.farClipPlane);
            Vector3 tapPositionNear = new Vector3(Input.mousePosition.x, Input.mousePosition.y, Camera.main.nearClipPlane);

            Vector3 tapPosF = Camera.main.ScreenToWorldPoint(tapPositionFar);
            Vector3 tapPosN = Camera.main.ScreenToWorldPoint(tapPositionNear);

            int layerMask = LayerMask.GetMask("Button", "Pin");

            RaycastHit hit;
            if (Physics.Raycast(tapPosN, tapPosF - tapPosN, out hit, Mathf.Infinity, layerMask))
            {
                btnName = hit.transform.name;
                Debug.Log("NAME OF HIT TARGET: " + hit.transform.name);

                myAudioSource.clip = audioClips[0];
                myAudioSource.Play();

                switch (btnName)
                {
                    case "buttonRoute1":
                        Debug.Log("In: CASE 1");
                        playAnimation(btnName);

                        stations = GameObject.FindGameObjectsWithTag("Route1");
                        Debug.Log(stations);

                        LineRenderer lineRenderer = GetComponent<LineRenderer>();

                        foreach (GameObject station in stations)
                        {
                            MeshRenderer mRenderer = station.GetComponent<MeshRenderer>();
                            if (mRenderer.enabled == true)
                            {
                                mRenderer.enabled = false;
                            }
                            else
                            {
                                mRenderer.enabled = true;
                            }
                        }
                        return;

                    case "buttonRoute2":
                        Debug.Log("In: CASE 2");
                        playAnimation(btnName);
                        return;

                    case "buttonRoute3":
                        Debug.Log("In: CASE 3");
                        playAnimation(btnName);
                        return;

                    case "buttonRoute4":
                        Debug.Log("In: CASE 4");
                        playAnimation(btnName);
                        return;

                    case "buttonRoute5":
                        Debug.Log("In: CASE 5");
                        playAnimation(btnName);
                        return;

                    case "buttonRoute6":
                        Debug.Log("In: CASE 6");
                        playAnimation(btnName);
                        return;
                }
            }
            else
            {
                //Debug.DrawRay(ray.origin, ray.direction * 1000, Color.white);
                Debug.Log("NOTHING HAPPENED WITH MOUSE");
            }
        }
    }

    void playAnimation(string btnName)
    {
        GameObject currentGameObject = GameObject.Find(btnName);
        Animation anim = currentGameObject.GetComponent<Animation>();
        anim.Play();
    }
}
These cubes are set to specific XZ coordinates in the Unity scene and are not moved programmatically at runtime by any other script.
I also augment pins, which are placed at the 0/0/0 of an image target and get repositioned at runtime to new XZ coordinates calculated from their lat/long coordinates. Their raycast hits are detected by the above script as well.
When I run my application in the editor, everything works perfectly fine. Each of the elements, the cubes and the pins, gets hit by the raycast exactly as it should.
So far so good.
When I build the Android version and install it on my Xiaomi Android phone, the raycasts don't hit like they should. I don't get any hit at all when I touch the cubes at their original position beside the map. BUT I get hits in the blue marked area seen in the picture below.
My unity scene showing the cube buttons
It looks like the box colliders of the cubes on the side all get moved onto the 0/0/0 position of my image target at runtime, although the models keep their original position.
The pins don't get any hits at all, although they have active mesh colliders on them and do get hit in the editor.
I'm extremely desperate now, since I've already tried just about every piece of advice I found in thousands of threads.
I tried setting up a new AR camera element, moving the script to another object, changing the hierarchy, changing colliders, resetting colliders, trying different raycast scripts from mouse click to touch, using a different device, and so on.
I would appreciate it SO MUCH if any of you has a hint as to what the problem could be.
If more information is needed, just let me know!
Thanks so far.
Greetings
EDIT:
@Philipp Lenssen and others
As I already said, I think the colliders move at runtime. I debugged the colliders of buttonRoute1 and buttonRoute2 while hitting the blue marked zone from screenshot 1.
I get hits, and the colliders' positions are completely different.
The green and yellow marked cubes visualize the box colliders of the Route 1 and Route 2 button cubes beside the map.
They are not even at the same wrong position: one is above Y = 0 and one underneath. Their X and Z coordinates are completely weird. They should keep the position of the green Route 2 and yellow Route 1 button 3D elements shown beside the map!
I have NO idea why this happens....
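For reference, the runtime check used for that debugging can be as small as logging the collider bounds inside the existing raycast branch (a hypothetical snippet, not part of the original script):

if (Physics.Raycast(tapPosN, tapPosF - tapPosN, out hit, Mathf.Infinity, layerMask))
{
    // Compare where the collider actually is at runtime with where the model sits in the scene.
    Debug.Log("Hit " + hit.transform.name
        + " | collider center: " + hit.collider.bounds.center
        + " | transform position: " + hit.transform.position);
}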
EDIT:
I've rebuilt the entire project as a new, clean one. I also updated my Android SDK to the latest version. Now the app works BUG-FREE!
I've spent my whole day trying to figure out a problem I'm having with my school project, so I'm off to my last resort: Stack Overflow!
My problem:
I'm trying to rotate a character relative to a Camera you can rotate around the character.
input: xbox controller
relevant information:
The camera rotates horizontally around the character, using the right joystick
The character movement happens with the left joystick
The character already moves relative to the camera; that's working as expected. I'm trying to rotate the character, which happens in a separate method.
When the left joystick is pulled downwards, the character should always be facing (and moving towards) the camera.
When the left joystick is pulled upwards, the character should always be facing the opposite of (and moving away from) the camera.
I'm leaving a lot of code out, just to keep it readable for you guys. If you need something, just ask and I'll provide.
What I have so far: https://imgur.com/TERUXV6
Why it's wrong: The character rotation is perfect. However, I'm cheating here. The camera rotates according to the world coordinates. As soon as I rotate the camera, this is obvious.
The following script is attached to the Character GameObject.
public class CharacterBehaviour : MonoBehaviour
{
    public GameObject HumanoidModel;

    [SerializeField] private Transform _mainCameraTransform;

    private void Update()
    {
        ApplyMovement();
        RotateCharacter();
    }

    private void ApplyMovement()
    {
        //get input movement vector
        Vector3 inputMovement = new Vector3(_inputMoveCharacterXAxis, 0, _inputMoveCharacterZAxis);

        //make sure camera forward is player movement forward
        Vector3 mainCameraForwardXz = Vector3.Scale(_mainCameraTransform.forward, new Vector3(1, 0, 1)); //multiplied by (1, 0, 1) to remove Y component
        Vector3 mainCameraRightXz = Vector3.Scale(_mainCameraTransform.right, new Vector3(1, 0, 1)); //multiplied by (1, 0, 1) to remove Y component

        Vector3 movementInCameraForwardDirection = mainCameraForwardXz * inputMovement.z;
        Vector3 movementInCameraRightDirection = mainCameraRightXz * inputMovement.x;

        Vector3 movementForward = movementInCameraForwardDirection + movementInCameraRightDirection;

        _velocity = movementForward * MaximumSpeed;
    }

    private void RotateCharacter()
    {
        Vector3 inputDirection = new Vector3(_inputMoveCharacterXAxis, 0, _inputMoveCharacterZAxis);
        HumanoidModel.transform.LookAt(HumanoidModel.transform.position +
                                       HumanoidModel.transform.forward + inputDirection);
    }
}
The following script is attached to the Main Camera GameObject
public class CameraBehaviour : MonoBehaviour
{
    [SerializeField] private Transform _characterTransform;
    [SerializeField] private Transform _mainCameraTransform;

    private void Update()
    {
        RotateCamera();
    }

    // Rotate camera horizontally
    private void RotateCamera()
    {
        _mainCameraTransform.RotateAround(_characterTransform.position, Vector3.up, _inputRotateCameraHorizontal);
    }
}
The source of the problem is in the RotateCharacter() function. I know I need some calculation in there to make the character rotation relative to the camera rotation; I just can't figure out what that calculation is, and why.
Thanks in advance!
Thrindil
So here's what you need:
camDefault, a Vector3 for the camera's initial position behind the character.
camCur, a Vector3 for the camera's current position (to track where it is in its orbit around the character).
You need to set camDefault in Awake() to the camera's position at that time, i.e. camDefault = cam.transform.position;
Then in FixedUpdate:
camCur = cam.transform.position;
then:
if (Input.GetAxis("Horizontal") == 0f) // your camera's horizontal axis here
{
    if (camCur != camDefault)
    {
        // move the camera back to its default position and look at the player again
        cam.transform.position = camDefault;
        cam.transform.LookAt(player.transform);
    }
}
Keep in mind that some of this is pseudocode, just a general direction, but the Unity methods are there. If properly implemented, this will let you rotate around your character with the right stick, and the camera will slide back behind you when you let go.
I believe, for your sanity, it would be easier to make the camera a direct child of ybot, so it rotates with it and you don't need to do it manually; the camera will always stay behind the player. But that's just my opinion. As it sits now, the camera and the model are children of the player; if you want the camera to turn with the player, just make it a child of that model.
In this case, you could store the initial rotation and the current rotation as above, and use your stick to look left and right and then snap back to forward when you let the stick go.
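If you would rather keep the character and camera decoupled, the calculation the original question asks about comes down to rotating the stick input by the camera's yaw before using it as a look direction. A minimal sketch reusing the question's field names (a starting point, not a drop-in replacement):

private void RotateCharacter()
{
    Vector3 inputDirection = new Vector3(_inputMoveCharacterXAxis, 0, _inputMoveCharacterZAxis);
    if (inputDirection.sqrMagnitude < 0.01f)
        return; // keep the current facing while the stick is released

    // Rotate the raw stick input by the camera's yaw so that "up" on the stick
    // always means "away from the camera", no matter where the camera has orbited to.
    float cameraYaw = _mainCameraTransform.eulerAngles.y;
    Vector3 worldDirection = Quaternion.Euler(0, cameraYaw, 0) * inputDirection;

    HumanoidModel.transform.rotation = Quaternion.LookRotation(worldDirection);
}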
I need to have a game object point north AND I want to combine this with gyro.attitude input. I have tried, unsuccessfully, to do this in one step. That is, I couldn't make any gyro script, which I found on the net, work with the additional requirement of always pointing north. Trust me, I have tried every script I could find on the subject. I deduced that it's impossible and probably was stupid to think it could be done; at least not this way (i.e. all-in-one). I guess you could say I surmised that you can't do two things at once. Then I thought possibly I could get the same effect by breaking-up the duties. That is, a game object that always points north via the Y axis. Great, got that done like this:
_parentDummyRotationObject.transform.rotation = Quaternion.Slerp(_parentDummyRotationObject.transform.rotation, Quaternion.Euler(0, 360 - Input.compass.trueHeading, 0), Time.deltaTime * 5f);
And with the game object pointing north on the Y, I wanted to add the second game-object, a camera in this case, with rotation using gyro input on the X and Z axis. The reason I have to eliminate the Y axes on the camera is because I get double rotation. With two things rotating at once (i.e. camera and game-object), a 180 degree rotation yielded 360 in the scene. Remember I need the game object to always point north (IRL) based on the device compass. If my device is pointing towards the East, then my game-object would be rotated 90 degrees in the unity scene as it points (rotation) towards the north.
I have read a lot about gyro camera controllers, and one thing I see mentioned a lot is that you shouldn't try to limit this to just one or two axes; with quaternions it's impossible when you don't know what you're doing, which I clearly do not.
I have tried all 3 solutions from this solved question: Unity - Gyroscope - Rotation Around One Axis Only and each has failed to rotate my camera on 1 axis to satisfy my rotational needs. Figured I'd try getting 1 axis working before muddying the waters with the 2nd axis. BTW, my requirements are simply that the camera should only rotate on 1 axis (in any orientation) based on the X axis of my device. If I could solve for X, then I thought it'd be great to get Z gyro input to control the camera as well. So far I cannot get the camera controlled on just 1 axis (X). Anyway, here are my findings...
The first solution, which used Input.gyro.rotationRateUnbiased, was totally inaccurate. That is, if I rotated my device around a few times and then put my phone/device down on my desk, the camera would be in a different rotation/location each time. There was no consistency. Here's my code for the first attempt/solution:
private void Update()
{
    Vector3 previousEulerAngles = transform.eulerAngles;
    Vector3 gyroInput = Input.gyro.rotationRateUnbiased;

    Vector3 targetEulerAngles = previousEulerAngles + gyroInput * Time.deltaTime * Mathf.Rad2Deg;
    targetEulerAngles.y = 0.0f;
    targetEulerAngles.z = 0.0f;

    transform.eulerAngles = targetEulerAngles;
}
The second solution was very consistent in that I could rotate my device around and then put it down on the desk and the unity camera always ended up in the same location/rotation/state so-to-speak. The problem I had was the camera would rotate on the one axis (X in this case), but it did so when I rotated my device on either the y or x axis. Either type of rotation/movement of my phone caused the unity camera to move on the X. I don't understand why the y rotation of my phone caused the camera to rotate on X. Here is my code for solution #2:
private void Start()
{
    Input.gyro.enabled = true;
    startEulerAngles = transform.eulerAngles;
    startGyroAttitudeToEuler = Input.gyro.attitude.eulerAngles;
}

private void Update()
{
    Vector3 deltaEulerAngles = Input.gyro.attitude.eulerAngles - startGyroAttitudeToEuler;
    deltaEulerAngles.y = 0.0f;
    deltaEulerAngles.z = 0.0f;

    transform.eulerAngles = startEulerAngles - deltaEulerAngles;
}
The 3rd solution: I wasn't sure how to complete this last solution, so it never really worked. With two axes zeroed out, the camera just flipped from facing left to right and back, or top to bottom and back, depending on which axes were commented out. If none of the axes were commented out (like the original solution), the camera would gyro around on all axes. Here's my code for attempt #3:
private void Start()
{
    _upVec = Vector3.zero;
    Input.gyro.enabled = true;
    startEulerAngles = transform.eulerAngles;
}

private void Update()
{
    Vector3 gyroEuler = Input.gyro.attitude.eulerAngles;
    phoneDummy.transform.eulerAngles = new Vector3(-1.0f * gyroEuler.x, -1.0f * gyroEuler.y, gyroEuler.z);

    _upVec = phoneDummy.transform.InverseTransformDirection(-1f * Vector3.forward);
    _upVec.z = 0;
    // _upVec.x = 0;
    _upVec.y = 0;

    transform.LookAt(_upVec);
    // transform.eulerAngles = _upVec;
}
Originally I thought it was my skills, but after spending a month on this I'm beginning to think that this is impossible to do. But that just can't be. I know it's a lot to absorb, but it's such a simple concept.
Any ideas?
EDIT: Thought I'd add my hierarchy:
CameraRotator (parent with script) -> MainCamera (child)
CompassRotator (parent) -> Compass (child with script which rotates parent)
I'd do this in the following way:
Camera with default 0, 0, 0 rotation
Screenshot
Object placed at the center of the default position of the camera.
Script for the Camera:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class NewBehaviourScript : MonoBehaviour
{
    Camera m_MainCamera;

    // Start is called before the first frame update
    void Start()
    {
        // Disable the sleep timeout during gameplay.
        // You can re-enable the timeout when menu screens are displayed as necessary.
        Screen.sleepTimeout = SleepTimeout.NeverSleep;

        // Enable the gyroscope.
        if (SystemInfo.supportsGyroscope)
        {
            Input.gyro.enabled = true;
        }

        m_MainCamera = Camera.main;
        m_MainCamera.enabled = true;
    }

    // Update is called once per frame
    void Update()
    {
        if (m_MainCamera.enabled)
        {
            // First - grab the gyro's orientation.
            Quaternion tAttitude = Input.gyro.attitude;

            // The device uses a left-handed orientation; we need to transform it to right-handed.
            Quaternion tGyro = new Quaternion(tAttitude.x, tAttitude.y, -tAttitude.z, -tAttitude.w);

            // The gyro attitude is tilted towards the floor and upside-down relative to what we want in Unity.
            // First rotate the orientation up 90 degrees on the X axis, then 180 degrees on the Z to flip it right-side up.
            Quaternion tRotation = Quaternion.Euler(-90f, 0, 0) * tGyro;
            tRotation = Quaternion.Euler(0, 0, 180f) * tRotation;

            // You can now apply this rotation to any Unity camera!
            m_MainCamera.transform.localRotation = tRotation;
        }
    }
}
With this script my object always faces SOUTH, no matter what.
If you want the object to face NORTH, you just have to turn the view 180° on the Y axis as a last rotation:
Quaternion tRotation = Quaternion.Euler(-90f, 0, 0) * tGyro;
tRotation = Quaternion.Euler(0, 0, 180f) * tRotation;
//Face NORTH:
tRotation = Quaternion.Euler(0,180f, 0) * tRotation;
Hope this might help ;)
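Building on the parent/child split described in the question (a compass-driven parent that points north, with the gyro-driven camera as its child), a minimal sketch of the compass half could look like the following; the class name and smoothing value are illustrative, and trueHeading generally needs location services running to report a sensible value:

using UnityEngine;

// Attach to the parent of the gyro-driven camera. It turns the parent so the
// child camera's heading stays aligned with true north, while the gyro script
// on the child keeps handling the device's attitude.
public class CompassAligner : MonoBehaviour
{
    public float smoothing = 5f; // illustrative value

    void Start()
    {
        Input.compass.enabled = true;
        Input.location.Start(); // trueHeading needs location services on most devices
    }

    void Update()
    {
        Quaternion northCorrection = Quaternion.Euler(0, -Input.compass.trueHeading, 0);
        transform.rotation = Quaternion.Slerp(transform.rotation, northCorrection, smoothing * Time.deltaTime);
    }
}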