Teleport while Controlling GameObject in Unity3D - c#

I am a freshman design student and we've been asked to create a game in Unity3D without much training, so needless to say I don't know much beyond the super basic stuff. I don't know any C#, and I've been having an issue making a GameObject teleport. I've spent six hours searching for a solution online, and the only conclusion I reached is that my object probably has trouble teleporting because of the way I am controlling it - something to do with the controller remembering the last position before the teleport and returning to it. I have no idea how to fix it, though.
So this is what my scene looks like: I have a sphere as my character. I move it onto another object whose collider is set as a trigger, which then teleports my sphere to a different point (the black object) on the terrain. As soon as my sphere gets there, it starts sliding back to the point where the teleport happened. I even tried Edit > Project Settings > Physics > Auto Sync Transforms, since many people suggested that and it worked for them, but it didn't help me.
This is the code by which I control my player:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class MyPlayer : MonoBehaviour
{
    public float speed = 1;
    public float spacing = 1;
    private Vector3 pos;

    // Use this for initialization
    void Awake()
    {
        pos = transform.position;
    }

    // Update is called once per frame
    void Update()
    {
        if (Input.GetKeyDown(KeyCode.W))
            pos.x += spacing;
        if (Input.GetKeyDown(KeyCode.S))
            pos.x -= spacing;
        if (Input.GetKeyDown(KeyCode.D))
            pos.z -= spacing;
        if (Input.GetKeyDown(KeyCode.A))
            pos.z += spacing;

        transform.position = Vector3.MoveTowards(transform.position, pos, speed * Time.deltaTime);
    }
}
and I also have a camera that follows the sphere using this code
using UnityEngine;
using System.Collections;

public class CompleteCameraController : MonoBehaviour
{
    public GameObject player;   // Public variable to store a reference to the player game object
    private Vector3 offset;     // Private variable to store the offset distance between the player and camera

    // Use this for initialization
    void Start()
    {
        // Calculate and store the offset value by getting the distance between the player's position and camera's position.
        offset = transform.position - player.transform.position;
    }

    // LateUpdate is called after Update each frame
    void LateUpdate()
    {
        // Set the position of the camera's transform to be the same as the player's, but offset by the calculated offset distance.
        transform.position = player.transform.position + offset;
    }
}
and I also have another script on the camera that lets me look around with the mouse:
using UnityEngine;
using System.Collections;

public class FlyCamera : MonoBehaviour
{
    /*
    Written by Windexglow 11-13-10. Use it, edit it, steal it, I don't care.
    Converted to C# 27-02-13 - no credit wanted.
    Simple flycam I made, since I couldn't find any others made public.
    Made simple to use (drag and drop, done) for regular keyboard layout.
    wasd  : basic movement
    shift : makes the camera accelerate
    space : moves the camera on the X and Z axes only, so the camera doesn't gain any height */

    float mainSpeed = 700.0f;   // regular speed
    float shiftAdd = 950.0f;    // multiplied by how long shift is held; basically running
    float maxShift = 2000.0f;   // maximum speed when holding shift
    float camSens = 0.25f;      // how sensitive the mouse look is
    private Vector3 lastMouse = new Vector3(255, 255, 255);   // kind of in the middle of the screen, rather than at the top (play)
    private float totalRun = 1.0f;

    void Update()
    {
        lastMouse = Input.mousePosition - lastMouse;
        lastMouse = new Vector3(-lastMouse.y * camSens, lastMouse.x * camSens, 0);
        lastMouse = new Vector3(transform.eulerAngles.x + lastMouse.x, transform.eulerAngles.y + lastMouse.y, 0);
        transform.eulerAngles = lastMouse;
        lastMouse = Input.mousePosition;
        // Mouse camera angle done.

        // Keyboard commands
        float f = 0.0f;
        Vector3 p = GetBaseInput();
        if (Input.GetKey(KeyCode.LeftShift))
        {
            totalRun += Time.deltaTime;
            p = p * totalRun * shiftAdd;
            p.x = Mathf.Clamp(p.x, -maxShift, maxShift);
            p.y = Mathf.Clamp(p.y, -maxShift, maxShift);
            p.z = Mathf.Clamp(p.z, -maxShift, maxShift);
        }
        else
        {
            totalRun = Mathf.Clamp(totalRun * 0.5f, 1f, 1000f);
            p = p * mainSpeed;
        }

        p = p * Time.deltaTime;
        Vector3 newPosition = transform.position;
        if (Input.GetKey(KeyCode.Space))
        {   // If the player wants to move on the X and Z axes only
            transform.Translate(p);
            newPosition.x = transform.position.x;
            newPosition.z = transform.position.z;
            transform.position = newPosition;
        }
        else
        {
            transform.Translate(p);
        }
    }

    private Vector3 GetBaseInput()
    {   // Returns the basic input values; if it's 0 then it's not active.
        Vector3 p_Velocity = new Vector3();
        if (Input.GetKey(KeyCode.W))
        {
            p_Velocity += new Vector3(0, 0, 1);
        }
        if (Input.GetKey(KeyCode.S))
        {
            p_Velocity += new Vector3(0, 0, -1);
        }
        if (Input.GetKey(KeyCode.A))
        {
            p_Velocity += new Vector3(-1, 0, 0);
        }
        if (Input.GetKey(KeyCode.D))
        {
            p_Velocity += new Vector3(1, 0, 0);
        }
        return p_Velocity;
    }
}
Please let me know if there's a specific part of my code I need to edit to resolve this, or alternatively, if you have different code that won't give me this issue, that would make my life so much easier. If I need to edit something, or you're sharing code, please respond with the complete (corrected) code, because otherwise I will just be even more confused.
I know this is a super long post and I am sorry but I am really desperate. It's been really hard studying online and basically having to teach myself all of this. This is for a final project so I will really appreciate any help you can throw my way. Thank you for reading and thanks for any help in advance.
EDIT: The teleport code is executing fine, because I do teleport to the chosen location; I just end up sliding back to the point from which I teleported.
This is the teleporting code I am using.
using UnityEngine;
using System.Collections;

public class Teleport : MonoBehaviour
{
    public GameObject ui;
    public GameObject objToTP;
    public Transform tpLoc;

    void Start()
    {
        ui.SetActive(false);
    }

    void OnTriggerStay(Collider other)
    {
        ui.SetActive(true);
        if ((other.gameObject.tag == "Player") && Input.GetKeyDown(KeyCode.E))
        {
            objToTP.transform.position = tpLoc.transform.position;
        }
    }

    void OnTriggerExit()
    {
        ui.SetActive(false);
    }
}

Ok, so the main reason your character is drifting back to its original position is that the pos variable in the MyPlayer script keeps its old value after teleporting.
The remedy is to update pos from the Teleport script right after objToTP.transform.position = tpLoc.transform.position; - something like objToTP.GetComponent<MyPlayer>().pos = tpLoc.position; (note that GetComponent is called directly on the GameObject; the original objToTP.gameobject won't compile).
But make sure that objToTP actually has a MyPlayer component and that pos in MyPlayer is public.
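Put together, a minimal sketch of that change inside the Teleport script could look like this (assuming pos has been made public in MyPlayer, as described above):

    void OnTriggerStay(Collider other)
    {
        ui.SetActive(true);
        if ((other.gameObject.tag == "Player") && Input.GetKeyDown(KeyCode.E))
        {
            objToTP.transform.position = tpLoc.position;

            // Keep the movement target in sync so MoveTowards doesn't drag the player back.
            MyPlayer player = objToTP.GetComponent<MyPlayer>();
            if (player != null)
                player.pos = tpLoc.position;
        }
    }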
Once again: this is the simple way to resolve your problem. In a real project you would want a more flexible architecture, but that's a different story.

I believe you are sliding back because you move the player with transform.position = Vector3.MoveTowards in Update(), and you keep moving it toward coordinates that were captured from your input before the teleport.

Related

Move object in Unity 2D

I have made this script to move my player with no physics involved:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class movement : MonoBehaviour
{
    [SerializeField] private float speed;
    private float horinzontal;
    private float vertical;

    private void Update()
    {
        vertical = Input.GetAxis("Vertical");
        horinzontal = Input.GetAxis("Horizontal");

        if (horinzontal > 0.01f)
            transform.localScale = Vector3.one;
        else if (horinzontal < -0.01f)
            transform.localScale = new Vector3(-1, 1, 1);
    }
}
But my player is not moving; he is just turning left and right.
Why does this happen, and how can I fix it?
Right now, your code is just changing what direction the player faces since you're editing transform.localScale (how big the object is / what direction your object faces). You want to edit transform.position of the GameObject (the location of your object).
Assuming that this is a 2D project, for basic horizontal movement you can do the following:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Movement : MonoBehaviour
{
    public float speed = 5f;

    private void Update()
    {
        if (Input.GetAxis("Horizontal") > 0.01f)
        {
            transform.localScale = Vector3.one;
            transform.position = new Vector2(transform.position.x + (speed * Time.deltaTime), transform.position.y);
        }
        else if (Input.GetAxis("Horizontal") < -0.01f)
        {
            transform.localScale = new Vector3(-1, 1, 1);
            transform.position = new Vector2(transform.position.x - (speed * Time.deltaTime), transform.position.y);
        }
    }
}
This should be enough to get you started. If you want to be sure you understand how it works, try adding vertical movement yourself; the code is very similar to what I've written here (even if you don't need vertical movement, it's good practice).
This approach is by no means perfect, or even good. You will need to fine-tune how the character moves to fit your game. I highly recommend trying some of the Unity tutorials to familiarize yourself with C# and Unity.
I especially recommend the Unity microgame tutorials if longer projects seem too daunting. They give you a crash course in the basics, and you'll finish with a collection of games you can modify yourself. Also note that you can access official courses and lessons directly from Unity Hub.
You do not seem to be making any movement changes; only the scale changes. To fix the immediate problem, add the input vector to transform.position. I have applied this change in the condensed code below, but the more principled way is to use a Rigidbody2D; for that case, read the link below (and see the sketch after it).
[SerializeField] private float speed = 1;
private float horinzontal;
private float vertical;

private void Update()
{
    vertical = Input.GetAxis("Vertical");
    horinzontal = Input.GetAxis("Horizontal");

    // Mathf.Sign never returns 0 (it returns 1 for an input of 0), so check the raw axis value instead.
    if (Mathf.Approximately(horinzontal, 0f)) return;

    var _sign = Mathf.Sign(horinzontal);
    transform.localScale = new Vector3(_sign, 1);
    transform.position += new Vector3(_sign * speed * Time.deltaTime, 0);
}
more information about Rigidbody2D:
https://docs.unity3d.com/Manual/class-Rigidbody2D.html
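For completeness, a hedged sketch of what that more physics-friendly Rigidbody2D version might look like (the class name, field defaults, and use of rb.velocity are illustrative assumptions, not taken from the answer above):

using UnityEngine;

// Illustrative sketch: drive horizontal movement through a Rigidbody2D so collisions are respected.
public class RigidbodyMovement : MonoBehaviour
{
    [SerializeField] private float speed = 5f;
    private Rigidbody2D rb;

    private void Awake()
    {
        rb = GetComponent<Rigidbody2D>();
    }

    private void FixedUpdate()
    {
        float horizontal = Input.GetAxis("Horizontal");

        // Flip the sprite to face the direction of travel.
        if (Mathf.Abs(horizontal) > 0.01f)
            transform.localScale = new Vector3(Mathf.Sign(horizontal), 1, 1);

        // Let the physics engine move the body; vertical velocity (gravity, jumps) is preserved.
        rb.velocity = new Vector2(horizontal * speed, rb.velocity.y);
    }
}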

How to make a thrown object rotate automatically based on the direction its thrown in?

So what I am trying to do is pull a spear back (like a slingshot) and send it flying, all while making it rotate in the air to mimic a real thrown spear. So far I have only been able to make it rotate to a certain position after a few seconds, but I can already tell that this isn't really a permanent fix and will definitely lead to problems down the line. I have tried looking for better ways to rotate the object, but I can't really find anybody else who had a problem like this, and it's hard to understand the Unity documentation on rotation because it's written for 3D objects.
Here is a gif showing how it looks like:
https://gfycat.com/chiefglasshorsemouse
This is the code that I have attached to my spear object that's responsible for letting me pull it back and launch it with the mouse:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.SceneManagement;
using UnityEngine.Scripting.APIUpdating;

public class spear : MonoBehaviour
{
    Rigidbody2D rb;
    Vector3 velocity;
    private Vector3 _initialPosition;
    [SerializeField] private float _launchPower = 500;
    bool rotating = false;
    public GameObject objectToRotate;
    private float _timeSittingAround;
    private bool _spearWasLaunched;
    [SerializeField] private float _spearRotation = 360;

    private void Awake()
    {
        _initialPosition = transform.position;
        rb = GetComponent<Rigidbody2D>();
    }

    void Start()
    {
        rb = GetComponent<Rigidbody2D>();
    }

    // rotates the spear
    IEnumerator rotateObject(GameObject gameObjectToMove, Quaternion newRot, float duration)
    {
        if (rotating)
        {
            yield break;
        }
        rotating = true;

        Quaternion currentRot = gameObjectToMove.transform.rotation;
        float counter = 0;
        while (counter < duration)
        {
            counter += Time.deltaTime;
            gameObjectToMove.transform.rotation = Quaternion.Lerp(currentRot, newRot, counter / duration);
            yield return null;
        }
        rotating = false;
    }

    // reloads the scene if spear goes out of bounds or lays dormant for 2 seconds
    void Update()
    {
        GetComponent<LineRenderer>().SetPosition(0, transform.position);
        GetComponent<LineRenderer>().SetPosition(1, _initialPosition);

        if (_spearWasLaunched &&
            GetComponent<Rigidbody2D>().velocity.magnitude <= 0.1)
        {
            _timeSittingAround += Time.deltaTime;
        }

        if (transform.position.y > 30 ||
            transform.position.y < -12.5 ||
            transform.position.x > 40 ||
            transform.position.x < -20 ||
            _timeSittingAround > 2)
        {
            string currentSceneName = SceneManager.GetActiveScene().name;
            SceneManager.LoadScene(currentSceneName);
        }
    }

    // slingshot mechanic
    private void OnMouseDrag()
    {
        Vector3 newPosition = Camera.main.ScreenToWorldPoint(Input.mousePosition);
        transform.position = new Vector3(newPosition.x, newPosition.y);
    }

    private void OnMouseDown()
    {
        GetComponent<SpriteRenderer>().color = Color.red;
        GetComponent<LineRenderer>().enabled = true;
    }

    // launches the spear when mouse is released as well as begins the rotating mechanic
    private void OnMouseUp()
    {
        Vector2 directionToInitialPosition = _initialPosition - transform.position;
        GetComponent<Rigidbody2D>().AddForce(directionToInitialPosition * _launchPower);
        GetComponent<Rigidbody2D>().gravityScale = 1;
        _spearWasLaunched = true;
        GetComponent<LineRenderer>().enabled = false;

        Quaternion rotation2 = Quaternion.Euler(new Vector3(0, 0, _spearRotation));
        StartCoroutine(rotateObject(objectToRotate, rotation2, 3f));
    }
}
There are a few ways to do it. You could try setting up rigidbodies and maybe joints and weighting different parts so that the tip of the spear naturally rotates the spear. More info on that here: https://answers.unity.com/questions/14899/realistic-rotation-of-flying-arrow.html
There's also another solution in that link for doing it with one line of code, making it rotate in the direction of movement.
In the top of your class, add this:
public Rigidbody2D mySpear;
Then in the inspector, drag the spear object into that slot to assign it. Then in your code, remove all your rotation code. In Update() add this:
// Smoothly align the spear's right axis with its velocity direction
// (Rigidbody2D.velocity replaces the removed mySpear.rigidbody shortcut, and Vector3.Slerp is used since Vector2 has no Slerp).
mySpear.transform.right =
    Vector3.Slerp(mySpear.transform.right, mySpear.velocity.normalized, Time.deltaTime);
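Wrapped up as a self-contained script, a hedged sketch of that approach could look like this (the class name and the zero-velocity guard are additions for illustration, not part of the answer above):

using UnityEngine;

// Illustrative sketch: point the spear's right axis along its current velocity every frame.
public class SpearAligner : MonoBehaviour
{
    public Rigidbody2D mySpear;   // drag the spear's Rigidbody2D here in the inspector

    private void Update()
    {
        // Skip alignment while the spear is (almost) stationary; a zero vector has no direction to normalize.
        if (mySpear.velocity.sqrMagnitude < 0.01f)
            return;

        mySpear.transform.right = Vector3.Slerp(
            mySpear.transform.right,
            mySpear.velocity.normalized,
            Time.deltaTime);
    }
}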
A third way to do it is...
Figure out the rotation (on the z axis) of the object when it's pointing directly down. Put that down as a constant called downRotation or something.
Set a minimum velocity and maximum velocity that will be used by the rotation.
I recommend also adding an AnimationCurve variable.
Instead of having a special function for rotation, just put it in FixedUpdate. If the spear's velocity.x is not 0, then rotate.
The speed of the rotation should be based on the speed of the object. If the object's x velocity is very slow then it would probably fall (rotate) more quickly. If it's going very fast, the rotation would not change much.
Don't use this exact code, but it'd look something like this:
// This part figures out how fast rotation should be based on how fast the object is moving
var curve = speedCurve.Evaluate(1 - ((rigidBody.velocity.x - minVelocity) / (maxVelocity - minVelocity)));
// Rotates the spear
var rotate = rigidBody.rotation.eulerAngles;
var speed = curve * rotateSpeed * Time.fixedDeltaTime;
rotate.z = Mathf.MoveTowards(rotate.z, downRotation, speed);
transform.rotation = Quaternion.Euler(rotate);
Side note: You should use rigidbody rotation like I am here, and you should be assigning components to variables in the inspector instead of calling GetComponent every frame.
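A more fleshed-out, hedged sketch of that third approach for a 2D spear might look like the following; the field names, default values, and the use of Rigidbody2D.MoveRotation are all assumptions made for illustration:

using UnityEngine;

// Illustrative sketch of the curve-driven "tip falls as the spear slows" rotation.
public class SpearTipRotation : MonoBehaviour
{
    [SerializeField] private Rigidbody2D rb;                    // assign in the inspector
    [SerializeField] private AnimationCurve speedCurve = AnimationCurve.Linear(0, 0, 1, 1);
    [SerializeField] private float rotateSpeed = 180f;          // degrees per second at full effect
    [SerializeField] private float minVelocity = 1f;
    [SerializeField] private float maxVelocity = 20f;
    private const float downRotation = -90f;                    // z angle when the spear points straight down

    private void FixedUpdate()
    {
        // Only rotate while the spear is actually moving horizontally.
        if (Mathf.Approximately(rb.velocity.x, 0f))
            return;

        // Slow horizontal speed -> curve value near 1 -> the tip drops quickly;
        // fast horizontal speed -> curve value near 0 -> the rotation barely changes.
        float t = Mathf.InverseLerp(minVelocity, maxVelocity, Mathf.Abs(rb.velocity.x));
        float step = speedCurve.Evaluate(1f - t) * rotateSpeed * Time.fixedDeltaTime;

        rb.MoveRotation(Mathf.MoveTowardsAngle(rb.rotation, downRotation, step));
    }
}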

Unity: 3D movement/Collision detection failure (AddForce, MovePosition, transform.localPosition)

Problem:
If I get the movement to work correctly, then the collision meshes are not detected. If I get the collision meshes detected, then the movement doesn't work correctly.
Brief summary of project:
I have a 3D environment with non-moveable objects (with collider meshes) and a moveable GameObject (a rigidbody with two box colliders) that I am controlling using haptic devices (basically a 3D joystick) through a UDP connection from a C++ app I put together, which runs alongside the Unity application. The communication between the haptic devices and Unity is perfectly fine. I am using the position information passed from the haptic devices as my variables for moving my GameObject. Again, the position data arrives in Unity just fine; the method for using that position data, with the appropriate conditions and functions inside Unity, is where I am currently stuck.
Things I've tried:
If I use transform.localPosition (hapticDevicePosition); then the movement is great, but it ignores the colliders and passes through everything. I read online and understand that transform.localPosition will basically move my object on top of other objects without regard to physics. I also read that I may be able to cast a ray something like 0.000001 in front of my object so that movement is blocked if the ray hits another object. This might be a way to keep using transform.localPosition? I'm not sure, and I've never used rays, so it would be difficult for me to set that script up correctly.
I've tried AddForce. This behaves very oddly: it only gives me force output on two of the three axes, i.e., I can only move along two of the three axes. I don't understand why it behaves this way. The colliders are detected, however.
I've tried rb.MovePosition (rb.position + posX + posY + posZ) and various combinations of *Time.deltaTime and *speed as well. This also doesn't work correctly: the colliders are detected, but the movement either doesn't work at all or doesn't work correctly.
Conclusion:
I've played with my script for the last 4 hours, and some (not all) of the things I attempted are commented out so they are still visible (please see the code attached below). I will keep reading explanations online and trying different code, and I will update here if I work out a solution. If anyone has pointers or suggestions in the meantime, I would greatly appreciate it.
Thanks!
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class FalconPegControl_2 : MonoBehaviour {

    // Define needed variables
    private TestUDPConnection udpListener;
    public Vector3 realObjectCurrentPos;
    private Vector3 realObjectLastPos;
    public Vector3 realObjectCurrentRot;
    private Vector3 realObjectLastRot;
    public Vector3 realObjectPosChange;
    public Vector3 realObjectRotChange;
    private Quaternion rotation;
    //public float pi = 3.14f;
    private Rigidbody rb;
    private int control = 0;
    public bool collisionOccurred = false;
    //public float thrust = 1000;
    //public CalibrationManager calibrationManager;

    // Use this for initialization
    void Start () {
        udpListener = GetComponentInParent<TestUDPConnection>();
        collisionOccurred = false;
        rb = GetComponent<Rigidbody> ();
        SharedRefs.falconPegControl = this;
    }

    public void OffControl ()
    {
        control = 0;
    }

    public void CollisionDuplicateFix ()
    {
        collisionOccurred = true;
    }

    // Update is called once per frame
    void FixedUpdate () {
        //WITHOUT UNITY AXIS CONVERSION:
        //realObjectCurrentPos[0] = udpListener.xPosReal; //[m]
        //realObjectCurrentPos[1] = udpListener.yPosReal; //[m]
        //realObjectCurrentPos[2] = udpListener.zPosReal; //[m]
        //===============================
        //Unity axis conversions:
        //CHAI3D --> Unity
        //(x, y, z) --> (x, -z, y)
        //CHAI3D: realObjectCurrentPos[0], [1], [2] is CHAI3D (x, y, z)
        //Also, to compensate for the workspace available to the Falcon device (~0.04, ~0.06, ~0.06),
        //adding a value of x10 allows it to reach the default hemisphere successfully
        //updated comment: the sign values that work (-, +, -)
        //===============================
        //Unity conversion for rotation (using Falcon devices)
        //Since one Falcon is for translation and the other is for rotation,
        //the rotation information is a conversion of translational information;
        //in other words, the max range of (~0.04, ~0.06, ~0.06) has been converted into a max range of (90, 90, 90)
        //using basic algebra (i.e., (90/0.04)),
        //thus giving the user the full range of 180 degrees (from 90 degrees to -90 degrees)
        realObjectCurrentPos[0] = udpListener.xPosReal * (-5); //[m]
        realObjectCurrentPos[1] = udpListener.zPosReal * (5);  //[m]
        realObjectCurrentPos[2] = udpListener.yPosReal * (-5); //[m]
        realObjectCurrentRot [0] = udpListener.xRot * (90f / 0.04f); //degrees
        realObjectCurrentRot [1] = udpListener.yRot * (90f / 0.06f); //degrees
        realObjectCurrentRot [2] = udpListener.zRot * (90f / 0.06f); //degrees

        if (Input.GetKeyDown ("1")) {
            control = 1;
            SharedRefs.stopWatch.startTimer ();
        }
        if (Input.GetKeyDown ("space"))
        {
            OffControl ();
        }

        if (control==1)
        {
            Vector3 posUnity = new Vector3 (realObjectCurrentPos[0], realObjectCurrentPos[1], realObjectCurrentPos[2]);
            rb.AddForce (posUnity);
            //Vector3 tempVect = new Vector3(realObjectCurrentPos[0], realObjectCurrentPos[1], realObjectCurrentPos[2]);
            //Vector3 startPoint = new Vector3 (0f, 0.0225f, 0f);
            //tempVect = tempVect * speed * Time.deltaTime;
            //transform.localPosition = realObjectCurrentPos; //[m]
            //var unityX = Vector3.Scale (posTemp, Vector3.right);
            //var unityY = Vector3.Scale (posTemp, Vector3.up);
            //var unityZ = Vector3.Scale (posTemp, Vector3.forward);
            //Vector3 unityX = new Vector3 (Vector3.Scale (posTemp, Vector3.right), Vector3.Scale (posTemp, Vector3.up), Vector3.Scale (posTemp, Vector3.forward));
            //Vector3 unityY = new Vector3 (Vector3.Scale (posTemp, Vector3.up));
            //Vector3 unityZ = new Vector3 (Vector3.Scale (posTemp, Vector3.forward));
            //rb.MovePosition (rb.position + unityX + unityY + unityZ);
            //transform.localPosition = (startPoint + tempVect); //[m]
            transform.localRotation = Quaternion.Euler(realObjectCurrentRot); //[m]
            realObjectLastPos = realObjectCurrentPos; //[m]
            realObjectLastRot = realObjectCurrentRot; //[m]
            realObjectPosChange = realObjectCurrentPos - realObjectLastPos; //[m]
            realObjectRotChange = realObjectCurrentRot - realObjectLastRot;
        }
        else if (control==0)
        {
            Vector3 stop = new Vector3 (0, 0, 0);
            rb.constraints = RigidbodyConstraints.FreezePositionZ | RigidbodyConstraints.FreezeRotationZ;
            rb.constraints = RigidbodyConstraints.FreezePositionX | RigidbodyConstraints.FreezeRotationX;
            rb.constraints = RigidbodyConstraints.FreezePositionX | RigidbodyConstraints.FreezeRotationX;
            rb.velocity = (stop);
        }
    }
}
Also, an update based on Ali Baba's comments:
I haven't had time to test the other methods yet, but by using AddForce and playing with the drag and a force-modifier variable, I was able to get control over all three axes (actually 6 DOF, because I also have rotational control from a second external device), and I now have much better control over my GameObject than before (specifically thanks to the drag and force-modifier adjustments). This may be the best solution, but I originally needed the position to change based on the positions of the external devices I am using. I'm adding a basic, slimmed-down, adjusted version of the code, which uses AddForce and allows key-controlled adjustments of the drag and the force-modifier variable, in case other beginners find this thread. In the meantime, I will try to get the other functions (MovePosition, etc.) working and update with the results.
Slim, basic drag/variable testing code:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Real_Controller : MonoBehaviour {

    // Define needed variables
    private TestUDPConnection udpListener;
    public Vector3 realObjectCurrentPos;
    public Vector3 realObjectCurrentRot;
    private Quaternion rotation;
    private Rigidbody rb;
    private float increaseForce = 23;

    // Use this for initialization
    void Start () {
        udpListener = GetComponentInParent<TestUDPConnection>();
        rb = GetComponent<Rigidbody> ();
        rb.drag = 1.24f;
    }

    // Update is called once per frame
    void FixedUpdate () {
        if (Input.GetKeyDown ("q"))
        {
            rb.drag -= 0.1f;
            Debug.Log ("drag is: " + rb.drag);
        }
        if (Input.GetKeyDown ("w"))
        {
            rb.drag += 0.1f;
            Debug.Log ("drag is: " + rb.drag);
        }
        if (Input.GetKeyDown ("a")) {
            increaseForce -= 1f;
            Debug.Log ("increased force is: " + increaseForce);
        }
        if (Input.GetKeyDown ("s")) {
            increaseForce += 1f;
            Debug.Log ("increase force is: " + increaseForce);
        }

        realObjectCurrentPos[0] = udpListener.xPosReal * (-increaseForce); //[m]
        realObjectCurrentPos[1] = udpListener.zPosReal * (increaseForce);  //[m]
        realObjectCurrentPos[2] = udpListener.yPosReal * (-increaseForce); //[m]

        Vector3 forceDirection = realObjectCurrentPos - transform.localPosition;
        rb.AddForce (forceDirection * forceDirection.magnitude);

        realObjectCurrentRot [0] = udpListener.xRot * (90f / 0.04f); //degrees
        realObjectCurrentRot [1] = udpListener.yRot * (90f / 0.06f); //degrees
        realObjectCurrentRot [2] = udpListener.zRot * (90f / 0.06f); //degrees
        transform.localRotation = Quaternion.Euler(realObjectCurrentRot); //[m]
    }
}
Instead of placing the gameObject at the exact position of your controller, you could try applying a force in the direction of the position you want your gameObject to be in:
if (control==1)
{
    Vector3 forceDirection = realObjectCurrentPos - transform.localPosition;
    rb.AddForce (forceDirection);
    transform.localRotation = Quaternion.Euler(realObjectCurrentRot);
}
The force applied here is linear in the distance between the position of the gameObject and the real object, so it basically behaves like a spring. You should try multiplying the force by different factors and testing:
rb.AddForce (forceDirection * 0.5f);
Or scale it quadratically:
rb.AddForce (forceDirection * forceDirection.magnitude);
Whatever feels best
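As a concrete, hedged illustration of that tuning advice, using the fields already present in the question's script (springStrength is an invented name, just a knob to experiment with):

    [SerializeField] private float springStrength = 0.5f;   // tune until the motion feels right

    void FixedUpdate ()
    {
        // Spring-like pull toward where the haptic device says the object should be.
        Vector3 forceDirection = realObjectCurrentPos - transform.localPosition;
        rb.AddForce (forceDirection * springStrength);

        transform.localRotation = Quaternion.Euler(realObjectCurrentRot);
    }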

Shooting a projectile but goes in wrong direction - 2D game

I am trying to make a simple game where I shoot a projectile when the user touches the screen. I first spawn 8 projectiles (sprites), and when the user touches the screen I would like the top projectile to fly toward the touch. I was able to do this; however, every time I shoot, the projectile goes in the wrong direction. Here is an image which illustrates the issue.
Obviously the image is a still, but the object keeps flying until it goes off the screen and then gets destroyed.
Here is the code snippet that handles this
GameplayController.cs
if (Input.GetMouseButtonDown(0))
{
    Vector3 position = new Vector3(Input.mousePosition.x, Input.mousePosition.y, 0);
    position = Camera.main.ScreenToWorldPoint(position);

    GameObject target;
    target = new GameObject();
    target.transform.position = position;

    Arrow comp = currentArrows[0].GetComponent<Arrow>();
    comp.setTarget(target.transform);
    comp.GetComponent<Arrow>().arrowSpeed = 12;
    comp.GetComponent<Arrow>().shoot = true;

    currentArrows.RemoveAt(0);
    Destroy(target);
}
I know I am getting mouse input here rather than phone touch, and that's fine for me; I will convert it to touch input later.
Arrow.cs
public bool shoot = false;
public float arrowSpeed = 0.0f;
public Vector3 myDir;
public float speed = 30.0f;
private Transform target;

// Use this for initialization
void Start () {
}

// Update is called once per frame
void Update () {
    if (shoot)
    {
        transform.position += transform.right * arrowSpeed * Time.deltaTime;
    }
}

public void setTarget(Transform targetTransform)
{
    this.target = targetTransform;
    Vector3 vectorToTarget = target.position - transform.position;
    float angle = Mathf.Atan2(vectorToTarget.y, vectorToTarget.x) * Mathf.Rad2Deg;
    Quaternion q = Quaternion.AngleAxis(angle, Vector3.forward);
    transform.rotation = Quaternion.Slerp(transform.rotation, q, Time.deltaTime * speed);
}

private void OnBecameInvisible()
{
    print("Disappeared");
    Destroy(gameObject);
    Gameplay.instance.isShooting = false;
}
Vector3 position = new Vector3(Input.mousePosition.x, Input.mousePosition.y, 0);
I think your problem is that you're getting the screen coordinates of the click, not the world coordinates, which are actually two different things. In order to direct your projectile correctly you need to convert the screen coordinates to world coordinates, as is done here.
The next thing is how you move the projectile:
transform.position += transform.right * arrowSpeed * Time.deltaTime;
You're moving the projectile along its right axis and then rotating it separately. Maybe you should try moving it with Vector3.Lerp, which would be simpler, and rotating it with Transform.LookAt.
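For instance, a minimal hedged sketch of the Arrow movement in that spirit (note: this variant stores the normalized direction toward the click and keeps flying along it rather than using Lerp/LookAt, and the flyDirection field is an invented name, not part of the original script):

    private Vector3 flyDirection;

    public void setTarget(Transform targetTransform)
    {
        // Remember the world-space direction from the arrow to the click.
        flyDirection = (targetTransform.position - transform.position).normalized;
        // Point the sprite's right axis along that direction immediately.
        transform.right = flyDirection;
    }

    void Update()
    {
        if (shoot)
        {
            // Keep flying along the stored direction until OnBecameInvisible destroys the arrow.
            transform.position += flyDirection * arrowSpeed * Time.deltaTime;
        }
    }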

Collision detection script fails sometimes

For the most part this script works; however, every now and then an enemy will fail at pathfinding and go through a building or wall. Is there a way I can stop this?
using UnityEngine;
using System.Collections;

namespace Daniel {
    public class EnemyAI : Living {

        // Detection
        private int range = 10;
        private float speed = 10f;
        private bool isThereAnyThing = false;

        // Waypoints/Targets
        public GameObject[] targets;
        private float rotationSpeed = 900f;
        private RaycastHit hit;
        GameObject target;
        [SerializeField]
        private int randomTarget = 0;
        [SerializeField]
        float timeToNextCheck = 3;
        public float effectTimer = 2f;
        public GameObject deathEffect;
        public LayerMask detectThis;

        void Start()
        {
            randomTarget = Random.Range(0, 8);
            target = targets[randomTarget];
        }

        void FixedUpdate()
        {
            timeToNextCheck = timeToNextCheck - Time.deltaTime;
            ScanForNewWaypoint();
            LookAtTarget();
            Move();
            CheckForObsticales();
        }

        void LookAtTarget()
        {
            // Look smoothly towards the target if there is nothing in front.
            if (!isThereAnyThing)
            {
                Vector3 relativePos = target.transform.position - transform.position;
                Quaternion rotation = Quaternion.LookRotation(relativePos);
                transform.rotation = Quaternion.Slerp(transform.rotation, rotation, Time.deltaTime);
            }
        }

        void Move()
        {
            // Enemy translates in the forward direction.
            transform.Translate(Vector3.forward * Time.deltaTime * speed);
        }

        public void CheckForObsticales()
        {
            // Checking for any obstacle in front.
            // Two rays, left and right of the object, to detect the obstacle.
            Transform leftRay = transform;
            Transform rightRay = transform;

            // Use Physics.Raycast to detect the obstacle.
            if (Physics.Raycast(leftRay.position + (transform.right * 7f), transform.forward, out hit, range, detectThis) || Physics.Raycast(rightRay.position - (transform.right * 7f), transform.forward, out hit, range))
            {
                if (hit.collider.gameObject.CompareTag("Obstacles"))
                {
                    isThereAnyThing = true;
                    transform.Rotate(Vector3.up * Time.deltaTime * rotationSpeed);
                }
            }

            // Now two more raycasts at the back of the object to detect that it has already passed the obstacle.
            // Setting this boolean to false means there is nothing in front of the object.
            if (Physics.Raycast(transform.position - (transform.forward * 4), transform.right, out hit, 10, detectThis) ||
                Physics.Raycast(transform.position - (transform.forward * 4), -transform.right, out hit, 10, detectThis))
            {
                if (hit.collider.gameObject.CompareTag("Obstacles"))
                {
                    isThereAnyThing = false;
                }
            }
        }

        public void ScanForNewWaypoint()
        {
            CheckForObsticales();
            if (timeToNextCheck <= 0)
            {
                timeToNextCheck = Random.Range(6, 3);
                randomTarget = Random.Range(0, 8);
                target = targets[randomTarget];
            }
        }

        public override void TakeHit(float dmg, Vector3 hitPoint, Vector3 hitDirection)
        {
            if (dmg >= health)
            {
                Destroy(Instantiate(deathEffect, hitPoint, Quaternion.FromToRotation(Vector3.forward, hitDirection)) as GameObject, effectTimer);
                Debug.Log("Exploded");
            }
            base.TakeHit(dmg, hitPoint, hitDirection);
        }
    }
}
Okay, this is one of the most common problems people face when working with the physics engine, and believe me, there are a lot of factors that add up to the behaviour you are seeing.

The most common mistake is to run physics calculations in Update, which executes every frame; that is simply wrong for physics. All physics-related calculation needs to be performed inside FixedUpdate, otherwise it becomes too expensive to run physics-based simulation every single frame.

In your case, where the enemy goes through walls and physics collisions are being missed, the best thing you can do is tweak some physics-related settings in Unity. They are located under Edit > Project Settings > Time: change the Fixed Timestep and Maximum Allowed Timestep in the inspector to get your collisions right.

There are some other things to keep in mind while dealing with physics, like scale, mass, colliders, and so on. Here is a link covering the detailed mechanics of the physics settings in Unity; make sure you read and understand it before tweaking the physics settings.
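For reference, the same two values the answer points at can also be inspected or set from code; a tiny hedged sketch (the numbers are placeholders to experiment with, not recommendations):

// Smaller fixed timestep = more physics steps per second (more accurate collisions, more CPU cost).
Time.fixedDeltaTime = 0.01f;     // "Fixed Timestep" in Edit > Project Settings > Time
Time.maximumDeltaTime = 0.1f;    // "Maximum Allowed Timestep"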
