Using Unity3d 2018.2.1 and C#
Problem: the timer is not starting when the user clicks the screen; the touch is not being registered, so the timer never starts.
In FixedUpdate() I'm checking for when the user touches the screen in order to start a timer.
If the user hasn't touched the screen again and it has been 15 seconds, then do something. If the user touches the screen again before the 15 seconds are up, then restart the timer.
public float uiStartTime = 0f;
public float uiCurrentTime = 0f;
public float uiMaxTime = 15f;

private void FixedUpdate()
{
    if (Input.touchCount > 0)
    {
        Touch touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Ended)
        {
            uiStartTime = Time.time;
        }
    }

    uiCurrentTime = Time.time;
    if (uiStartTime - uiCurrentTime == uiMaxTime)
    {
        // Reset Timer
        uiStartTime = 0f;
        uiCurrentTime = 0f;
        // Do Something
    }
}
Problem: the timer is not starting when the user clicks the screen; the touch is not being registered, so the timer never starts. I'm using a mouse to test but will deploy for mobile.
That's the main issue here. Of course, you still have some logical errors in your code, but the timer won't start as long as you're using a mouse to test the Input.touchCount and Input.GetTouch API. These two APIs only work on a mobile device since they use touch instead of the mouse.
If you want to use them in the Editor, use Unity Remote 5. Download it on your mobile device, enable it in the Editor by going to Edit -> Project Settings -> Editor, then connect your device to your computer. You should then be able to use the touch API in the Editor without having to build the project. See this post for more information about Unity Remote 5.
If you want the code to work with both mouse and touch, make a simple function that wraps the touch and mouse APIs. You can then use the combination of the UNITY_STANDALONE and UNITY_EDITOR preprocessor directives to detect when you are not running on a mobile platform.
Here is a simple mobile/desktop touch wrapper that works without Unity Remote 5:
bool ScreenTouched(out TouchPhase touchPhase)
{
#if UNITY_STANDALONE || UNITY_EDITOR
    // DESKTOP COMPUTERS
    if (Input.GetMouseButtonDown(0))
    {
        touchPhase = TouchPhase.Began;
        return true;
    }
    if (Input.GetMouseButtonUp(0))
    {
        touchPhase = TouchPhase.Ended;
        return true;
    }
    touchPhase = TouchPhase.Canceled;
    return false;
#else
    // MOBILE DEVICES
    if (Input.touchCount > 0)
    {
        touchPhase = Input.GetTouch(0).phase;
        return true;
    }
    touchPhase = TouchPhase.Canceled;
    return false;
#endif
}
Below is how to use it to get what you want in your question:
public float uiStartTime = 0f;
public float uiMaxTime = 15f;
private bool timerRunning = false;

private void Update()
{
    // Increment the timer if it's running
    if (timerRunning)
        uiStartTime += Time.deltaTime;

    TouchPhase touchPhase;
    if (ScreenTouched(out touchPhase))
    {
        // Check for a touch and start the timer if it's not running
        if (touchPhase == TouchPhase.Ended && !timerRunning)
        {
            timerRunning = true;
        }
        // If the user touches the screen before the 15 seconds, restart the timer
        else if (touchPhase == TouchPhase.Ended && timerRunning && uiStartTime < uiMaxTime)
        {
            uiStartTime = 0f;
            Debug.Log("Timer Reset");
        }
    }

    // If the user hasn't touched the screen again and it has been 15 seconds, do something
    if (uiStartTime >= uiMaxTime)
    {
        // Do Something
        Debug.Log("Timer not touched for 15 seconds");
    }
}
Your equality check will essentially never be true when using floats, and uiStartTime - uiCurrentTime also gives you a negative number.
Instead do:
if (uiCurrentTime - uiStartTime >= uiMaxTime)
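A minimal sketch of how that could look dropped back into the original FixedUpdate (same field names as in the question; resetting uiStartTime after triggering is my own assumption so the block doesn't fire every frame):

private void FixedUpdate()
{
    if (Input.touchCount > 0)
    {
        Touch touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Ended)
        {
            // Touch released: (re)start the 15 second countdown
            uiStartTime = Time.time;
        }
    }

    uiCurrentTime = Time.time;

    // >= instead of == so the check can't be skipped between frames
    if (uiStartTime > 0f && uiCurrentTime - uiStartTime >= uiMaxTime)
    {
        // Do Something

        // Reset Timer so this doesn't trigger again every frame
        uiStartTime = 0f;
        uiCurrentTime = 0f;
    }
}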
Overview
Using Unity2D 2019.3.5, I am making a platformer game in C#. I implemented a raycast to detect when my player is touching the ground and attempted to make it so the player can jump only once.
Problem
Although I thought I programmed my character to jump once, after the first jump the Unity Inspector still shows a checkmark for my "isGrounded" variable, and it only turns false (unchecked) after a second jump, before hitting the ground.
My Code
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class Player_Controller : MonoBehaviour
{
    public int playerSpeed = 10;
    public int playerJumpPower = 1250;
    private float moveX;
    public bool isGrounded;
    public float distanceToBottomOfPlayer = .7f;

    // Update is called once per frame
    void Update()
    {
        PlayerMove();
        PlayerRaycast();
    }

    void PlayerMove()
    {
        // CONTROLS
        moveX = Input.GetAxis("Horizontal");
        if (Input.GetButtonDown("Jump") && isGrounded == true)
        {
            Jump();
        }

        // ANIMATIONS

        // PLAYER DIRECTION
        if (moveX < 0.0f)
        {
            GetComponent<SpriteRenderer>().flipX = true;
        }
        else if (moveX > 0.0f)
        {
            GetComponent<SpriteRenderer>().flipX = false;
        }

        // PHYSICS
        gameObject.GetComponent<Rigidbody2D>().velocity = new Vector2(moveX * playerSpeed,
            gameObject.GetComponent<Rigidbody2D>().velocity.y);
    }

    void Jump()
    {
        GetComponent<Rigidbody2D>().AddForce(Vector2.up * playerJumpPower);
        isGrounded = false;
    }

    void PlayerRaycast()
    {
        // Ray Down
        RaycastHit2D rayDown = Physics2D.Raycast(transform.position, Vector2.down);
        if (rayDown.collider != null && rayDown.distance < distanceToBottomOfPlayer &&
            rayDown.collider.tag == "ground")
        {
            isGrounded = true;
        }
    }
}
Extra Info
I did have to change a Unity setting in Edit > Project Settings > Physics 2D > Queries Start In Colliders. I had to turn this setting off (uncheck) in order to get my player to jump using the code I wrote above. I know there are other ways of making my player jump, however, this seemed to be the most efficient while maintaining the readability of the code.
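(For reference, the same setting also appears to be exposed in code as Physics2D.queriesStartInColliders, so something like the sketch below could disable it when the scene loads; this is just an alternative to unchecking the box, and nothing else in this question depends on it.)

void Awake()
{
    // Equivalent of unchecking Project Settings > Physics 2D > Queries Start In Colliders
    Physics2D.queriesStartInColliders = false;
}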
Solutions Tried
I believe the problem is a raycast issue that I don't know how to fix. I looked at other Stack Overflow posts, including the ones recommended after writing this post, but none of them applied to my problem.
Final Notes
As I said before, I know there are other ways to make my player jump only once using different code; however, I would like to stick with this code for my own learning purposes and for future reference.
You can't be sure that isGrounded stays false after you call Jump(), because PlayerRaycast() can set it back to true on the same frame. I think the issue lies there.
Try not setting the isGrounded flag when calling Jump(); setting isGrounded should be purely PlayerRaycast()'s job.
void Update()
{
    // Raycast before moving
    PlayerRaycast();
    PlayerMove();
}

void PlayerRaycast()
{
    // Ray Down
    RaycastHit2D rayDown = Physics2D.Raycast(transform.position, Vector2.down);
    if (rayDown.collider != null && rayDown.collider.tag == "ground")
    {
        if (rayDown.distance < distanceToBottomOfPlayer)
        {
            isGrounded = true;
        }
        else
        {
            isGrounded = false;
        }
    }
}

void Jump()
{
    GetComponent<Rigidbody2D>().AddForce(Vector2.up * playerJumpPower);
    //isGrounded = false;
}
The first problem is that you add the force and check for the ground in the same frame.
Applied Force is calculated in FixedUpdate or by explicitly calling
the Physics.Simulate method.
So after you press the "Jump" button, the object is still on the ground until the next frame comes.
To fix this, you can simply swap the order of "move" and "raycast":
void Update()
{
PlayerRaycast();
PlayerMove();
}
The second problem is that if the jump power is not large enough, the object can still be close to the ground in the next frame, so you should skip the ground check while the jump is still ascending:
void PlayerRaycast()
{
    if (GetComponent<Rigidbody2D>().velocity.y > 0)
        return;

    ...
}
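For illustration, here is one way the two fixes could be combined into a single PlayerRaycast() (a sketch using the question's field names; treating a missing or non-"ground" hit as "not grounded" is my own assumption):

void PlayerRaycast()
{
    // Skip the ground check while still moving upwards, otherwise the ray
    // can re-detect the ground on the frame right after AddForce
    if (GetComponent<Rigidbody2D>().velocity.y > 0)
        return;

    // Ray Down
    RaycastHit2D rayDown = Physics2D.Raycast(transform.position, Vector2.down);
    isGrounded = rayDown.collider != null
        && rayDown.collider.tag == "ground"
        && rayDown.distance < distanceToBottomOfPlayer;
}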
I am trying to apply a force to a rigidbody object by touching it, dragging away from it, and then releasing the touch, but the TouchPhase.Ended case just never runs, and I can't find the answer to this problem. I get input from one touch until the touch is released; on release, the distance between the starting and final positions is calculated and a corresponding force is applied to the rigidbody to make it move. The script is attached to the same object that needs to move.
// Update is called once per frame
void Update()
{
    // Update the Text on the screen depending on the current TouchPhase and the current direction vector
    // Track a single touch as a direction control.
    if (Input.GetMouseButton(0))
    {
        Touch touch = Input.GetTouch(0);

        // Get the position where the screen was touched
        _touchPosWorld = Camera.main.ScreenToWorldPoint(touch.position);

        RaycastHit2D hitInformation = Physics2D.Raycast(_touchPosWorld, Vector2.zero);
        FirstTouch = (hitInformation.collider.CompareTag("ball"));

        if (FirstTouch || IsInTouch)
        {
            // Handle finger movements based on TouchPhase
            switch (touch.phase)
            {
                // When a touch has first been detected, change the message and record the starting position
                case TouchPhase.Began:
                    // Record initial touch position.
                    IsInTouch = true;
                    _startPos = touch.position;
                    GameManager.GetInstance().ChangeAccordinglyText.text = "clicked Inside"; // test text
                    // Movement started
                    break;

                // Determine if the touch is a moving touch
                case TouchPhase.Moved:
                    // Determine direction by comparing the current touch position with the initial one
                    _direction = touch.position - _startPos;
                    GameManager.GetInstance().ChangeAccordinglyText.text = "MovingTouch to " + touch.position; // test text
                    // Moving
                    break;

                case TouchPhase.Ended:
                    // Report that the touch has ended when it ends
                    // End of movement of touch
                    GameManager.GetInstance().EndedText.text = "EndOfTouch";
                    float force = 0;
                    force = _direction.x > _direction.y ? _direction.x : _direction.y;
                    _rigidbody.AddForce(_direction * force);
                    FirstTouch = false;
                    IsInTouch = false;
                    break;
            }
        }
    }
}
Your TouchPhase.Ended case will never be reached, since at the moment you release the touch, GetMouseButton(0) also becomes false and thus the entire block is skipped!
To avoid the errors you are talking about, before trying to access a certain touch, first check whether there is any touch to access at all using Input.touchCount:
if (Input.touchCount > 0)
{
    var touch = Input.GetTouch(0);
    // ...
}
In general, for development you should rather check e.g. Input.touchSupported and implement an alternative mouse-based system for simulating the touches:
if (Input.touchSupported)
{
    /* Implement touches */
    if (Input.touchCount > 0)
    {
        var touch = Input.GetTouch(0);
        // ...
    }
}
// Optional
else
{
    /* Alternative mouse implementation for development */
    if (Input.GetMouseButtonDown(0))
    {
        // Simulates touch begin
    }
    else if (Input.GetMouseButton(0))
    {
        // Simulates touch moved
    }

    // I've seen on rare occasions that down and up might get called within one frame
    if (Input.GetMouseButtonUp(0))
    {
        // Simulates touch end
    }
}
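Applied to your Update, a rough sketch of the touch side could look like this (reusing your fields _touchPosWorld, _startPos, _direction, _rigidbody and IsInTouch; this is only an outline of the idea, not a drop-in replacement):

void Update()
{
    if (Input.touchCount > 0)
    {
        Touch touch = Input.GetTouch(0);

        switch (touch.phase)
        {
            case TouchPhase.Began:
            {
                // Only start tracking if the touch began on the ball
                _touchPosWorld = Camera.main.ScreenToWorldPoint(touch.position);
                RaycastHit2D hitInformation = Physics2D.Raycast(_touchPosWorld, Vector2.zero);
                if (hitInformation.collider != null && hitInformation.collider.CompareTag("ball"))
                {
                    IsInTouch = true;
                    _startPos = touch.position;
                }
                break;
            }

            case TouchPhase.Moved:
                if (IsInTouch)
                {
                    _direction = touch.position - _startPos;
                }
                break;

            case TouchPhase.Ended:
                // This branch is now reachable because we no longer depend on GetMouseButton(0)
                if (IsInTouch)
                {
                    float force = _direction.x > _direction.y ? _direction.x : _direction.y;
                    _rigidbody.AddForce(_direction * force);
                    IsInTouch = false;
                }
                break;
        }
    }
}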
I'm working on a simple Android game in Unity 2D. I have to make the player jump by adding force to or changing the velocity of its Rigidbody2D (I know how). The problem is I don't know how to tell Unity to add the force on a single screen tap (I'm not familiar with touch inputs), so any help would be appreciated. Please keep it clean as I'm just a beginner.
Here's the script I made so far.
{
    public Rigidbody2D _playerRB;
    public bool _canMove;
    public float _speed; // keep this above cam speed (relative)
    public float _jumpForce;

    void Start()
    {
        _playerRB = GetComponent<Rigidbody2D>();
    }

    void Update()
    {
        _playerRB.velocity = new Vector2(_speed, _playerRB.velocity.y);
        if (Input.touchCount > 0)
        {
            _playerRB.velocity = new Vector2(_playerRB.velocity.x, _jumpForce);
        }
    }
}
You can achieve this by handling the touch input in your Update function:
void Update()
{
    _playerRB.velocity = new Vector2(_speed, _playerRB.velocity.y);
    if (Input.touchCount > 0)
    {
        Touch touch = Input.GetTouch(0);
        if (touch.phase == TouchPhase.Began)
        {
            _playerRB.velocity = new Vector2(_playerRB.velocity.x, _jumpForce);
        }
    }
}
There are also other TouchPhases, such as TouchPhase.Moved, which indicates that the touch input has moved, and TouchPhase.Ended, which indicates that the finger has stopped touching the screen. Using these in your Update you can achieve a lot with touch controls.
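For example, a small sketch of how the other phases could be handled alongside the jump (the Moved and Ended branches are only placeholders to illustrate the idea, not part of the original answer):

void Update()
{
    _playerRB.velocity = new Vector2(_speed, _playerRB.velocity.y);

    if (Input.touchCount > 0)
    {
        Touch touch = Input.GetTouch(0);

        switch (touch.phase)
        {
            case TouchPhase.Began:
                // Finger just touched the screen: jump
                _playerRB.velocity = new Vector2(_playerRB.velocity.x, _jumpForce);
                break;

            case TouchPhase.Moved:
                // Finger is dragging; touch.deltaPosition is the movement since the last frame
                // (could be used for steering, for example)
                break;

            case TouchPhase.Ended:
                // Finger left the screen (could be used to stop charging a jump, for example)
                break;
        }
    }
}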
I am developing a game where objects fall from the top of the screen and you try to tap on them before they reach the bottom of the screen. The code works fine when I play the game in the editor (with the touch controls swapped to mouse controls). However, when I run the game on a phone, it only seems to register a successful hit if you tap slightly in front of the object in the direction it is traveling, and does not register a hit if you tap towards the back end or center of the object. I have built and run the game over 10 times now, each time trying to fix this issue, but nothing seems to help. My theory at the moment is that my code for the touch controls has too much going on and/or has redundancies, and by the time it checks whether or not an object is at the position of the touch, the object has moved to a different location. Any thoughts on why the hit boxes are off, and is there a better way to do hit detection with a touch screen?
void FixedUpdate()
{
    if (IsTouch())
    {
        CheckTouch(GetTouchPosition());
    }
}

// Returns true if the screen is touched
public static bool IsTouch()
{
    if (Input.touchCount > 0)
    {
        if (Input.GetTouch(0).phase == TouchPhase.Began)
        {
            return true;
        }
    }
    return false;
}

// Gets the position of the touch
private Vector2 GetTouchPosition()
{
    Vector2 touchPos = new Vector2(0f, 0f);
    Touch touch = Input.GetTouch(0);
    if (Input.GetTouch(0).phase == TouchPhase.Began)
    {
        touchPos = touch.position;
    }
    return touchPos;
}

// Checks the position of the touch and destroys the ball if the ball is touched
private void CheckTouch(Vector2 touchPos)
{
    RaycastHit2D hit = Physics2D.Raycast(
        Camera.main.ScreenToWorldPoint(Input.GetTouch(0).position), Vector2.zero);
    if (hit.collider != null)
    {
        Destroy(hit.collider.gameObject);
    }
}
Setting: I am creating my first multiplayer game and running into an odd issue.
It's a tank game where players can shoot bullets and kill each other.
You can charge the bullets to shoot them faster/further away.
Problem:
When the client player charges fully and releases, the bullet continues to be spawned repeatedly and never stops. This issue doesn't occur if the client player does not charge fully.
I believe the issue is in the Update function, within the if (m_CurrentLaunchForce >= m_MaxLaunchForce && !m_Fired) block.
Note:
The host player does not have this problem, therefore it's somehow related to networking.
private void Update()
{
    if (!isLocalPlayer)
        return;

    // Track the current state of the fire button and make decisions based on the current launch force.
    m_AimSlider.value = m_MinLaunchForce;

    if (m_CurrentLaunchForce >= m_MaxLaunchForce && !m_Fired)
    {
        m_CurrentLaunchForce = m_MaxLaunchForce;
        CmdFire();
    }
    else if (Input.GetButtonDown(m_FireButton) && !m_Fired)
    {
        m_Fired = false;
        m_CurrentLaunchForce = m_MinLaunchForce;
        m_ShootingAudio.clip = m_ChargingClip;
        m_ShootingAudio.Play();
    }
    else if (Input.GetButton(m_FireButton))
    {
        m_CurrentLaunchForce += m_ChargeSpeed * Time.deltaTime;
        m_AimSlider.value = m_CurrentLaunchForce;
    }
    else if (Input.GetButtonUp(m_FireButton))
    {
        CmdFire();
    }
}

[Command]
private void CmdFire()
{
    // Set the fired flag so Fire is only called once.
    m_Fired = true;

    // Create an instance of the shell and store a reference to its rigidbody.
    GameObject shellInstance = (GameObject)
        Instantiate(m_Shell, m_FireTransform.position, m_FireTransform.rotation);

    // Set the shell's velocity to the launch force in the fire position's forward direction.
    shellInstance.GetComponent<Rigidbody>().velocity = m_CurrentLaunchForce * m_FireTransform.forward;

    // Change the clip to the firing clip and play it.
    m_ShootingAudio.clip = m_FireClip;
    m_ShootingAudio.Play();

    NetworkServer.Spawn(shellInstance);

    // Reset the launch force. This is a precaution in case of missing button events.
    m_CurrentLaunchForce = m_MinLaunchForce;
}
If you look through the documentation you will find this: [Command] functions are invoked on the player object associated with a connection. This is set up in response to the "ready" message, by passing the player object to the NetworkServer.PlayerIsReady() function. The arguments to the command call are serialized across the network, so that the server function is invoked with the same values as the function on the client.
I think the reason why it wasn't working for you is that you have to pass an argument to the function, like:
[Command]
private void CmdFire(bool m_Fired)
{
    // Set the fired flag so Fire is only called once.
    m_Fired = true;

    // Create an instance of the shell and store a reference to its rigidbody.
    GameObject shellInstance = (GameObject)
        Instantiate(m_Shell, m_FireTransform.position, m_FireTransform.rotation);

    // Set the shell's velocity to the launch force in the fire position's forward direction.
    shellInstance.GetComponent<Rigidbody>().velocity = m_CurrentLaunchForce * m_FireTransform.forward;

    // Change the clip to the firing clip and play it.
    m_ShootingAudio.clip = m_FireClip;
    m_ShootingAudio.Play();

    NetworkServer.Spawn(shellInstance);

    // Reset the launch force. This is a precaution in case of missing button events.
    m_CurrentLaunchForce = m_MinLaunchForce;
}
And then call it like this:
CmdFire(m_Fired), where m_Fired has to point to the player's own variable.
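For illustration, the two call sites in Update would then look roughly like the sketch below; note that setting m_Fired locally on the client before the command is my own addition (to stop the first branch from re-entering every frame), not something the quoted documentation spells out:

if (m_CurrentLaunchForce >= m_MaxLaunchForce && !m_Fired)
{
    m_CurrentLaunchForce = m_MaxLaunchForce;
    m_Fired = true;      // assumption: also mark as fired on the client
    CmdFire(m_Fired);
}
// ... other branches unchanged ...
else if (Input.GetButtonUp(m_FireButton))
{
    m_Fired = true;      // assumption: also mark as fired on the client
    CmdFire(m_Fired);
}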