I am currently trying to detect whether a touch input hits a 3D object in my ARCore scene. I simply edited the HelloAR sample script to use a custom prefab instead of the Andy object, and after I have spawned it, I want to be able to touch it. The prefab consists of six 3D platforms, each with a different name and a box collider. The following code gives very weird results.
if (Frame.Raycast(touch.position.x, touch.position.y, raycastFilter, out hit))
{
    // Use hit pose and camera pose to check if the hit test is from the
    // back of the plane; if it is, there is no need to create the anchor.
    if ((hit.Trackable is DetectedPlane) &&
        Vector3.Dot(FirstPersonCamera.transform.position - hit.Pose.position,
            hit.Pose.rotation * Vector3.up) < 0)
    {
        Debug.Log("Hit at back of the current DetectedPlane");
    }
    else
    {
        if (!IsPointerOverUIObject())
        {
            if (!platsSpawned)
            {
                // Instantiate platforms at the hit pose.
                var platforms = Instantiate(platformPrefab,
                    new Vector3(hit.Pose.position.x, hit.Pose.position.y + offsetY, hit.Pose.position.z),
                    hit.Pose.rotation);

                // Create an anchor to allow ARCore to track the hit point as its
                // understanding of the physical world evolves.
                var anchor = hit.Trackable.CreateAnchor(hit.Pose);

                // Make platforms a child of the anchor.
                platforms.transform.parent = anchor.transform;
                platsSpawned = true;
            }
            else if (platsSpawned)
            {
                //Ray raycast = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
                Ray raycast = FirstPersonCamera.ScreenPointToRay(Input.GetTouch(0).position);
                RaycastHit raycastHit;
                if (Physics.Raycast(raycast, out raycastHit))
                {
                    try
                    {
                        debugLabel.text = raycastHit.collider.name;
                        var obj = GameObject.Find(raycastHit.collider.name);
                        obj.GetComponent<SomeScript>().DoeSmthng();
                    }
                    catch (Exception ex)
                    {
                        Debug.Log(ex.Message);
                    }
                }
            }
        }
    }
}
Running this code, it will sometimes not hit any platform (even though you are clearly touching one), sometimes it will detect the touch correctly, and sometimes it will hit one of the other platforms in the prefab (even when only one is visible at the moment).
I feel like something weird is going on with the raycast. I tried casting with Camera.main and with FirstPersonCamera, but the results are pretty much the same.
Any ideas on why this is happening? Does someone have a proper example, or can you help me correct my code?
EDIT: I found out that the raycasts work as long as the platform objects are not children of the anchor. I suspect the objects might be turned into "Trackables", but I am not sure how I would work with those.
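One thing worth checking in the code above: GameObject.Find looks an object up by name anywhere in the scene, so with six platforms (possibly sharing names, or suffixed with "(Clone)" after instantiation) it can return a different object than the one the ray actually hit. A minimal sketch that uses the hit collider's own GameObject instead (assuming a SomeScript component as in the question):

```csharp
Ray ray = FirstPersonCamera.ScreenPointToRay(Input.GetTouch(0).position);
RaycastHit raycastHit;
if (Physics.Raycast(ray, out raycastHit))
{
    // Use the GameObject the collider actually belongs to, instead of
    // searching the scene by name (which can match a different object).
    var obj = raycastHit.collider.gameObject;
    var script = obj.GetComponent<SomeScript>();
    if (script != null)
    {
        script.DoeSmthng();
    }
}
```

This also removes the need for the try/catch, since the null check replaces the exception path.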
For my bachelor thesis I'm augmenting a physical paper map. As written in the title, I'm using Unity in combination with Vuforia for image-target detection and further functions.
So far so good.
Now the problem:
I'm using cube elements that are augmented beside the map as interaction elements to filter the projected content. Those cubes have box colliders on them. I attached the following "ButtonController" script to my AR camera, which should handle the raycast hits on those cubes and trigger further functions.
using UnityEngine;
using UnityEngine.Events;
using UnityEngine.EventSystems;

public class ButtonController : MonoBehaviour
{
    public AudioClip[] audioClips;
    public AudioSource myAudioSource;

    private string btnName;
    private GameObject[] stations;

    private void Start()
    {
        myAudioSource = GetComponent<AudioSource>();
    }

    private void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            //Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            Vector3 tapPositionFar = new Vector3(Input.mousePosition.x, Input.mousePosition.y, Camera.main.farClipPlane);
            Vector3 tapPositionNear = new Vector3(Input.mousePosition.x, Input.mousePosition.y, Camera.main.nearClipPlane);
            Vector3 tapPosF = Camera.main.ScreenToWorldPoint(tapPositionFar);
            Vector3 tapPosN = Camera.main.ScreenToWorldPoint(tapPositionNear);
            int layerMask = LayerMask.GetMask("Button", "Pin");
            RaycastHit hit;

            if (Physics.Raycast(tapPosN, tapPosF - tapPosN, out hit, Mathf.Infinity, layerMask))
            {
                btnName = hit.transform.name;
                Debug.Log("NAME OF HIT TARGET: " + hit.transform.name);
                myAudioSource.clip = audioClips[0];
                myAudioSource.Play();

                switch (btnName)
                {
                    case "buttonRoute1":
                        Debug.Log("In: CASE 1");
                        playAnimation(btnName);
                        stations = GameObject.FindGameObjectsWithTag("Route1");
                        Debug.Log(stations);
                        LineRenderer lineRenderer = GetComponent<LineRenderer>();
                        foreach (GameObject station in stations)
                        {
                            // Toggle the station's visibility.
                            MeshRenderer mRenderer = station.GetComponent<MeshRenderer>();
                            mRenderer.enabled = !mRenderer.enabled;
                        }
                        return;
                    case "buttonRoute2":
                        Debug.Log("In: CASE 2");
                        playAnimation(btnName);
                        return;
                    case "buttonRoute3":
                        Debug.Log("In: CASE 3");
                        playAnimation(btnName);
                        return;
                    case "buttonRoute4":
                        Debug.Log("In: CASE 4");
                        playAnimation(btnName);
                        return;
                    case "buttonRoute5":
                        Debug.Log("In: CASE 5");
                        playAnimation(btnName);
                        return;
                    case "buttonRoute6":
                        Debug.Log("In: CASE 6");
                        playAnimation(btnName);
                        return;
                }
            }
            else
            {
                //Debug.DrawRay(ray.origin, ray.direction * 1000, Color.white);
                Debug.Log("NOTHING HAPPENED WITH MOUSE");
            }
        }
    }

    void playAnimation(string btnName)
    {
        GameObject currentGameObject = GameObject.Find(btnName);
        Animation anim = currentGameObject.GetComponent<Animation>();
        anim.Play();
    }
}
These cubes are set to specific XZ coordinates in the Unity scene and are not moved programmatically at runtime by any other script.
I also augment pins which are placed at the 0/0/0 of an image target and are repositioned at runtime to new XZ coordinates calculated from their lat/long coordinates. Their raycast hits are detected as well by the above script.
When I run my application in the editor, everything works perfectly fine. Each of the elements, the cubes and the pins, gets hit by the raycast exactly as it should.
So far so good.
When I build the Android version and install it on my Xiaomi Android phone, the raycasts don't hit as they should. I don't get any hit at all when I touch the cubes at their original position beside the map, BUT I do get hits in the blue-marked area seen in the picture below.
My unity scene showing the cube buttons
It looks like the box colliders of the cubes beside the map all get moved onto the 0/0/0 position of my image target at runtime, although the models keep their original position.
The pins don't get any hits at all, although they have active mesh colliders on them and get hit in the editor too.
I'm extremely desperate now, since I've already tried just about every piece of advice I found in countless threads.
I tried setting up a new AR camera element, moving the script to another object, changing the hierarchy, changing colliders, resetting colliders, different raycast scripts from mouse click to touch, using a different device, and so on.
I would appreciate it SO MUCH if any of you has a hint about what the problem could be.
If more information is needed, just let me know!
Thanks so far.
Greetings
EDIT:
@Philipp Lenssen and others
As I already said, I think the colliders move at runtime. I debugged the colliders of buttonRoute1 and buttonRoute2 while hitting the blue-marked zone from screenshot 1.
I get hits, and the colliders' positions are completely different.
The green- and yellow-marked cubes visualize the box colliders of the Route 1 and Route 2 button cubes beside the map
They are not even at the same wrong position. One is above Y = 0 and one underneath. Their X and Z coordinates are completely weird. They should keep the position of the green Route 2 and yellow Route 1 button 3D elements shown beside the map!
I have NO idea why this happens...
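One way to see where the box colliders really sit at runtime, on device or in the editor, is to draw their world-space bounds as gizmos. A minimal debugging sketch (the `targets` array is a hypothetical field you fill with the button cubes' colliders in the inspector):

```csharp
using UnityEngine;

public class ColliderDebugger : MonoBehaviour
{
    public Collider[] targets; // assign the button cubes' colliders in the inspector

    private void OnDrawGizmos()
    {
        Gizmos.color = Color.red;
        foreach (Collider c in targets)
        {
            if (c != null)
            {
                // Collider.bounds is in world space, so this wireframe shows
                // exactly where raycasts will hit, regardless of hierarchy.
                Gizmos.DrawWireCube(c.bounds.center, c.bounds.size);
            }
        }
    }
}
```

Comparing the drawn bounds against the visible models makes a collider/model mismatch immediately obvious in the Scene view.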
EDIT:
I've rebuilt the entire project as a new, clean one. I also updated my Android SDK to the latest version. Now the app works BUG-FREE!
I am developing a game where objects fall from the top of the screen, and you try to tap on them before they reach the bottom of the screen. The code works fine when I play the game in the editor (with the touch controls swapped to mouse controls), but when I run the game on a phone, it only seems to register a successful hit if you tap slightly in front of the object in the direction it is traveling, and does not register a hit if you tap towards the back end or center of the object. I have built and run the game over 10 times now, each time trying to fix this issue, but nothing seems to help. My theory at the moment is that my code for the touch controls has too much going on and/or has redundancies, and by the time it checks whether an object is at the position of the touch, the object has moved to a different location. Any thoughts on why the hit boxes are off, and is there a better way to do hit detection with a touch screen?
void FixedUpdate()
{
    if (IsTouch())
    {
        CheckTouch(GetTouchPosition());
    }
}

// Returns true if the screen is touched
public static bool IsTouch()
{
    if (Input.touchCount > 0)
    {
        if (Input.GetTouch(0).phase == TouchPhase.Began)
        {
            return true;
        }
    }
    return false;
}

// Gets the position of the touch
private Vector2 GetTouchPosition()
{
    Vector2 touchPos = new Vector2(0f, 0f);
    Touch touch = Input.GetTouch(0);
    if (Input.GetTouch(0).phase == TouchPhase.Began)
    {
        touchPos = touch.position;
    }
    return touchPos;
}

// Checks the position of the touch and destroys the ball if the ball is touched
private void CheckTouch(Vector2 touchPos)
{
    RaycastHit2D hit = Physics2D.Raycast(
        Camera.main.ScreenToWorldPoint(Input.GetTouch(0).position), Vector2.zero);
    if (hit.collider != null)
    {
        Destroy(hit.collider.gameObject);
    }
}
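If the timing theory is right, one likely culprit is polling input in FixedUpdate: it can run zero or several times per rendered frame, so a TouchPhase.Began event may be seen late (after the ball has moved) or missed entirely. A minimal sketch, under the assumption that moving the check into Update and testing the point directly fixes the offset:

```csharp
void Update()
{
    // Update runs once per rendered frame, so the touch is handled
    // in the same frame it began, before the ball moves further.
    if (Input.touchCount > 0 && Input.GetTouch(0).phase == TouchPhase.Began)
    {
        Vector2 worldPos = Camera.main.ScreenToWorldPoint(Input.GetTouch(0).position);

        // OverlapPoint checks which collider contains the point,
        // instead of casting a zero-length ray.
        Collider2D hit = Physics2D.OverlapPoint(worldPos);
        if (hit != null)
        {
            Destroy(hit.gameObject);
        }
    }
}
```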
I have a Ray and a RaycastHit. Whenever the user clicks the fire button, it moves the FPS character to the location of the RaycastHit. lightRetical is a GameObject variable, a spotlight that shows where the RaycastHit is.
The funny thing is, it works when I click Play in Unity, but when I build to my Android phone it doesn't work: I am unable to move the FPS character.
The FPS character I used is from the standard asset "Characters", and I added the code to the Update() method.
RaycastHit seen;
Ray raydirection = new Ray(transform.position, cam.transform.forward);
int sightlength = 5;

if (Physics.Raycast(raydirection, out seen, sightlength))
{
    // In the editor, tag anything you want to interact with and use the tag here.
    if (seen.collider.tag == "Floor" && Input.GetButtonDown("Fire1"))
    {
        Vector3 relativePoint;
        lightRetical.SetActive(true);
        relativePoint = seen.point;
        relativePoint.y = 2.0f;
        bodychar.transform.position = relativePoint;
    }
    else
    {
        lightRetical.SetActive(true);
        Vector3 relativePoint;
        relativePoint = seen.point;
        relativePoint.y = 2.64f;
        lightRetical.transform.position = relativePoint;
    }
}
else
{
    lightRetical.SetActive(false);
}
I suggest casting the ray from the camera position forward; if the player rotates their head, the raycast will follow. I'm currently developing an app for VR, and this seems like the best solution. You can use collision layers to filter the raycast. I would also print hit.transform to the console to check what the raycast is actually hitting. Hope this helps.
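A minimal sketch of that suggestion; the layer name "Interactable" and the maximum distance are assumptions, so substitute whatever layers and range your scene uses:

```csharp
void Update()
{
    // Cast from the camera, in the direction the camera is facing.
    Ray ray = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
    RaycastHit hit;

    // Restrict the cast to the layers you care about.
    int mask = LayerMask.GetMask("Interactable");

    if (Physics.Raycast(ray, out hit, 100f, mask))
    {
        // Log what was hit, to verify the ray behaves as expected.
        Debug.Log("Raycast hit: " + hit.transform.name);
    }
}
```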
I have a 2.5D-style game with falling blocks (like Tetris) and an orthographic projection setup (I've set my project up as the "3D" type).
I've defined a block like this:
public class Block {
    public Block() {
        this.gameObj = GameObject.CreatePrimitive(PrimitiveType.Cube);
    }

    public GameObject gameObj;
}
I have a BoardMgr (a GameObject with only a script component), where I spawn these blocks and store them in an array:
public class BoardMgr : MonoBehaviour {
    protected Block[] blocks;
    protected Block[,] board;
}
In BoardMgr's Update(), blocks fall down one after another (like Tetris). Now I'd like to figure out, when I click on a block, which block object it is. This is the click-detection code:
if (Input.GetMouseButtonDown(0)) {
    Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
    RaycastHit hit;
    if (Physics.Raycast(ray, out hit)) {
        Debug.Log("Ray hit block");
        // How do I find which block got hit here?
    } else {
        Debug.Log("Ray missed block");
    }
}
When I click on a block, I do see "Ray hit block" in the console, but then how do I access which "Block" object got hit? From the RaycastHit, how do I decode which Block it references?
I'm new to Unity (2 days old) but not new to gamedev, and I'm trying to find my way through Unity here. I'd appreciate it if someone could point me in the right direction.
// Check the GameObject by name
if (hit.collider.name == "brainydexter")
{
    Debug.Log("Hit: " + hit.collider.name);
}

// Check the GameObject by tag
if (hit.collider.CompareTag("brainydexterTag"))
{
}

// Check the GameObject by GameObject instance
GameObject otherGameObject = gameObject;
if (hit.collider.gameObject == otherGameObject)
{
}
EDIT: This is what you need.
Loop through the blocks array, then compare the GameObject instances:
for (int i = 0; i < blocks.Length; i++)
{
    if (hit.collider.gameObject == blocks[i].gameObj)
    {
        Debug.Log("Block hit is " + blocks[i].gameObj);
        break;
    }
}
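If the board holds many blocks, a dictionary keyed by the GameObject avoids the linear scan on every click. A sketch, assuming the `Block` class from the question; the method names here are illustrative, not part of the original code:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class BoardMgr : MonoBehaviour
{
    // Maps each cube's GameObject back to its owning Block.
    private Dictionary<GameObject, Block> blockLookup = new Dictionary<GameObject, Block>();

    // Call this once for each block when it is spawned.
    void RegisterBlock(Block b)
    {
        blockLookup[b.gameObj] = b;
    }

    // O(1) lookup in the click handler; returns null if the ray hit
    // something that isn't a registered block.
    Block FindHitBlock(RaycastHit hit)
    {
        Block b;
        blockLookup.TryGetValue(hit.collider.gameObject, out b);
        return b;
    }
}
```

Remember to remove entries from the dictionary when blocks are destroyed, so it doesn't hold references to dead objects.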
In your if block, use the hit parameter to detect which object was hit, because it holds the information about what the ray collided with, for example hit.collider.tag == "myBlock".
I like your question, and I think I see why the two answers above aren't quite what you want.
Usually, when Unity developers want to make this kind of block, they create a cube in the editor and attach a script to it to make a prefab.
What you do to define a block is more programmer-style: a C# class wrapping a primitive cube.
One approach is to compare the hit position's x and z with the position of each block in the array. Because the blocks are dropping, only y changes, so checking x and z is enough.
Hope this helps.
I'm writing a 2D game and I'm trying to get moving platforms to work. After some previous investigation, I have it ALMOST working. The idea is to have two platform objects with colliders: one a visible object, the other an invisible object with isTrigger set (since the player would just pass through a trigger). The code for the moving-platform child (the trigger one) is below.
using UnityEngine;
using System.Collections;

public class MovingPlatformChild : MonoBehaviour
{
    public string parentPlatform = "";

    void Start()
    {
        transform.parent = GameObject.Find(parentPlatform).transform;
    }

    void OnTriggerEnter(Collider playerObject)
    {
        Debug.Log("enter moving platform");
        if (playerObject.gameObject.name.Contains("Player"))
        {
            playerObject.transform.parent = gameObject.transform;
        }
    }

    int i = 0;

    void OnTriggerStay(Collider playerObject)
    {
        Debug.Log("stay" + i++);
        if (playerObject.transform.position.y >= transform.position.y)
        {
            playerObject.transform.parent = gameObject.transform;
        }
        else
        {
            playerObject.transform.parent = null;
        }
    }

    void OnTriggerExit(Collider playerObject)
    {
        Debug.Log("EXIT");
        if (playerObject.gameObject.name.Contains("Player"))
        {
            playerObject.transform.parent = null;
        }
    }
}
The Start() function just makes it a child of the visible platform. This can probably be done right in the Unity editor as well, instead of through code.
The OnTriggerEnter function adds the player object as a child of the trigger platform object, which is a child of the visible platform. So they should all move together.
The OnTriggerStay is an attempt to verify that this remains true only while the player is on the top of the platform. While the player is within the trigger, if the player is on top of the platform, then it remains attached. Otherwise, it's not. This is so that nothing happens on the bottom end.
The OnTriggerExit function just removes the player object as a child when it exits the trigger.
This is somewhat working (but we know somewhat isn't good enough). It works sometimes, but the player is very jittery. Also, on the way down while standing atop the platform, OnTriggerStay doesn't appear to be called (implying the player is no longer within the trigger); I observed this through my Debug "stay" statement. Finally, sometimes the player falls straight through the platform.
What in this code would allow the player to fall through the platform, or be so jittery on the way up? Am I missing something crucial? If you need any more code, please let me know.
Below is the code for the movement of the non-trigger platform (the parent of the trigger platform, in an identical position). The player's Update function follows after that.
void Start()
{
    origY = transform.position.y;
    useSpeed = -directionSpeed;
}

// Update is called once per frame
void Update()
{
    if (origY - transform.position.y > distance)
    {
        useSpeed = directionSpeed; // flip direction
    }
    else if (origY - transform.position.y < -distance)
    {
        useSpeed = -directionSpeed; // flip direction
    }
    transform.Translate(0, useSpeed * Time.deltaTime, 0);
}
And now the player code:
void Update()
{
    CharacterController controller = GetComponent<CharacterController>();
    float rotation = Input.GetAxis("Horizontal");

    if (controller.isGrounded)
    {
        moveDirection.Set(rotation, 0, 0);
        moveDirection = transform.TransformDirection(moveDirection);

        // Running: check whether shift is held.
        if (Input.GetKey(KeyCode.LeftShift) || Input.GetKey(KeyCode.RightShift))
        {
            running = true;
        }
        else
        {
            running = false;
        }
        moveDirection *= running ? runningSpeed : walkingSpeed; // set speed

        // Jumping.
        if (Input.GetButtonDown("Jump"))
        {
            //moveDirection.y = jumpHeight;
            jump();
        }
    }

    moveDirection.y -= gravity * Time.deltaTime;
    controller.Move(moveDirection * Time.deltaTime);
}
EDIT: I've added the specifications for the platforms and player in this imgur album:
http://imgur.com/a/IxgyS
This largely depends on the height of your trigger box, but it's worth looking into. Within your OnTriggerStay, you have an if statement comparing the player's Y coordinate. If the trigger box is fairly large and the platform's speed fast enough, then on the way up, between update ticks, the player's Y coordinate could momentarily be smaller than the trigger's Y coordinate. That would make him lose the parentage, only to regain it a few ticks later, which might be the cause of the "jittering".
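One way to make that check more forgiving is to compare the bottom of the player's collider against the top of the platform's collider, with a tolerance, rather than comparing transform centres. A sketch of a revised OnTriggerStay under that assumption (the tolerance value is a guess to tune per scene):

```csharp
void OnTriggerStay(Collider playerObject)
{
    // Compare the bottom of the player's collider with the top of the
    // platform's collider, with a small tolerance so fast platform motion
    // between ticks doesn't flip the result back and forth.
    float playerBottom = playerObject.bounds.min.y;
    float platformTop = GetComponent<Collider>().bounds.max.y;
    const float tolerance = 0.1f;

    if (playerBottom >= platformTop - tolerance)
    {
        playerObject.transform.parent = gameObject.transform;
    }
    else
    {
        playerObject.transform.parent = null;
    }
}
```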
The problems I was having, and what fixed them:
The moving platform was written using Translate. I rewrote it using a rigidbody and the Rigidbody.MovePosition function. This didn't immediately help, but...
I realized the CharacterMotor script (Unity provides this) that I had attached to the player included moving-platform support. I set the MovementTransfer value to PermaLocked and also unchecked the "Use FixedUpdate" box on the script, and it now works 99% of the time. I've had one occasion where a particular behaviour let me slip through, but I can't recreate it.
Hope this helps anyone who might be looking for an answer!