I have four buttons at the bottom of the screen, and notes fall from the top. I have this working for PC, but I am not sure how to code the touchscreen part for mobile, or what I need to change for it to work there.
I have two scripts: one for the buttons and one for each falling note.
This is what I have at the moment:
For the buttons:
void Update()
{
if(Input.GetKeyDown(keyToPress))
{
theSR.sprite = pressedImage;
}
if(Input.GetKeyUp(keyToPress))
{
theSR.sprite = defaultImage;
}
}
For the Notes:
void Update()
{
if(Input.GetKeyDown(keyToPress))
{
if(canBePressed)
{
gameObject.SetActive(false);
//GameManager.instance.NoteHit();
//Determine if player hits the note normal, good, perfect
if (Mathf.Abs(transform.position.y) > 0.25f)
{
GameManager.instance.NormalHit();
Instantiate(hitEffect, transform.position, hitEffect.transform.rotation);
}
else if (Mathf.Abs(transform.position.y) > 0.05f)
{
GameManager.instance.GoodHit();
Instantiate(goodEffect, transform.position, goodEffect.transform.rotation);
}
else
{
GameManager.instance.PerfectHit();
Instantiate(perfectEffect, transform.position, perfectEffect.transform.rotation);
}
}
}
}
This works perfectly for PC, but I just don't know what to change for the buttons to react exactly the same way on mobile as they do on PC.
Well, the answer depends on how you intend to get the user input on mobile. There are two ways to do this off the top of my head.
The first would be to get the touch position and check which of four lanes it falls into on the x axis, one lane per note column. You can get that with something like this:
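A rough sketch, assuming the legacy Input touch API and four equal-width lanes; PressLane and ReleaseLane are hypothetical hooks for the same logic your keyToPress code runs:
void Update()
{
    // Loop over all active touches (the legacy Input touch API).
    for (int i = 0; i < Input.touchCount; i++)
    {
        Touch touch = Input.GetTouch(i);
        // Divide the screen into 4 vertical strips, one per button.
        int lane = Mathf.Clamp((int)(touch.position.x / (Screen.width / 4f)), 0, 3);
        if (touch.phase == TouchPhase.Began)
        {
            PressLane(lane);   // hypothetical: run the same logic as GetKeyDown
        }
        else if (touch.phase == TouchPhase.Ended || touch.phase == TouchPhase.Canceled)
        {
            ReleaseLane(lane); // hypothetical: run the same logic as GetKeyUp
        }
    }
}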
The second: you can use the UI buttons from Unity, which work on all platforms. So you can test on PC and then upload to your phone and have it work the same way.
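If you go this route, here is a minimal sketch of a component you could attach to each UI button, assuming an EventSystem is present in the scene; the laneIndex field and the log messages are placeholders for your own press logic:
using UnityEngine;
using UnityEngine.EventSystems;

// Attach to each UI button so it reports both press and release,
// mirroring the GetKeyDown/GetKeyUp pair. Pointer events fire for
// mouse clicks and touches alike, so PC and mobile behave the same.
public class LaneButton : MonoBehaviour, IPointerDownHandler, IPointerUpHandler
{
    public int laneIndex; // which of the four lanes this button represents

    public void OnPointerDown(PointerEventData eventData)
    {
        Debug.Log("Lane " + laneIndex + " pressed");  // swap in your press logic
    }

    public void OnPointerUp(PointerEventData eventData)
    {
        Debug.Log("Lane " + laneIndex + " released"); // swap in your release logic
    }
}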
Hope any of this helps you!
I have built a driving simulator using Unity, and I use the Logitech G29 controller as the steering wheel.
So in my project I configured throttle and brake like this:
Vertical1 is used for the throttle function and Vertical2 for the brake function. This configuration is working now.
Now I need to configure another controller as well (HC1 3DRap). This is a hand controller. I checked it in the Windows device panel and I can see this:
Rotation Axis X and Rotation Axis Y already report a value at rest (without pressing the two levers).
Now I need to integrate this new controller into my project too. So I tried this:
With this setup, if I check the Y axis value with the following code (at this moment I am not pressing the levers):
Debug.Log("Input debug frenata: " + Input.GetAxis("Vertical2"));
I can display this:
If I press a lever, I see these values:
With this new controller connected to the system I am not able to drive the car, because I think the brake reads as pressed all the time.
Could you suggest how I can fix this bug?
I ran into a similar problem some time ago. I found out that the axis I was actually reading was not the one I expected.
Let's say you have a joystick with a separate "POV stick" on it. When you use the POV stick you might be moving the whole joystick and therefore changing the joystick's main axis. If you are only watching the main axis input, it looks like that is the input of your POV stick, but it actually isn't. So make sure the input you read is the correct one.
Then you have another problem: not every joystick, wheel, etc. maps its inputs to the same axes. So if you buy two more devices, they might be on different axes as well. If you try to handle that on your own, you will go crazy.
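One quick way to find out which physical control maps to which axis is to log everything. A debug sketch, assuming you have defined Input Manager entries for the joystick axes you want to inspect (the names "Axis 1" through "Axis 10" here are whatever you set up in Edit > Project Settings > Input):
using UnityEngine;

// Logs every configured axis that moves noticeably, so you can see
// which physical lever maps to which Input Manager entry. Assumes you
// created entries named "Axis 1" .. "Axis 10" mapped to joystick axes.
public class AxisProbe : MonoBehaviour
{
    void Update()
    {
        for (int i = 1; i <= 10; i++)
        {
            string axisName = "Axis " + i;
            float value = Input.GetAxis(axisName);
            if (Mathf.Abs(value) > 0.2f) // only log noticeable movement
            {
                Debug.Log(axisName + " = " + value);
            }
        }
    }
}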
There is a Unity forum thread about that topic (and other related problems).
And I found some Unity plugins that could probably solve your problem:
https://github.com/speps/XInputDotNet
https://github.com/JISyed/Unity-XboxCtrlrInput
I hope you can solve your problem with these inputs (please let us know, if you do).
Since you're seeing values even when the input device is not being used, it seems you'll need to "zero" the device and use dead zones, as you might for a joystick or controller.
In the player or input script, in OnEnable/Awake/Start (whichever works best for you), store the device's "zero" value in a field. If you run the assignment of the Vertical2 zero value in a method, you can also let the player recalibrate during play with a button that invokes that method:
private float _vertical2Zero = 0.0f;
void Start()
{
    this.CalibrateVertical2Zero();
    // ... more code ...
}
private void CalibrateVertical2Zero()
{
    this._vertical2Zero = Input.GetAxis("Vertical2");
}
Then, when you're checking the value later, test it against the "zero" value, and apply some deadzones if desired:
private float _vertical2Deadzone = 0.05f;
void HandleInput()
{
float newVertical2Value = Input.GetAxis("Vertical2");
bool vertical2Low = newVertical2Value <= ( this._vertical2Zero - _vertical2Deadzone );
bool vertical2High = newVertical2Value >= ( this._vertical2Zero + _vertical2Deadzone );
if( vertical2Low || vertical2High )
{
// Input detected on Vertical2, accounting for the zero and deadzone
}
}
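If you also want a usable 0..1 brake amount rather than just detecting input, you could remap the raw axis relative to the calibrated zero. A sketch, assuming the axis reaches roughly 1.0 when the lever is fully pressed; adjust for your hardware:
// Convert the raw axis into a 0..1 brake amount, treating the
// calibrated rest value as zero. Mathf.InverseLerp clamps to 0..1.
private float GetBrakeAmount()
{
    float raw = Input.GetAxis("Vertical2");
    return Mathf.InverseLerp(this._vertical2Zero, 1.0f, raw);
}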
Hopefully this helps with your problem.
How to manage input in Unity
The legacy Input Manager:
void Update()
{
if (Input.GetKeyDown(KeyCode.Space))
{
// Spacebar was pressed
}
if (Input.GetMouseButtonDown(0))
{
// Left mouse was pressed
}
}
The new Input System
using UnityEngine;
using UnityEngine.InputSystem;
public class ReportMousePosition : MonoBehaviour
{
    void Update()
    {
        // Each of these device accessors is null if no such device is connected.
        if (Mouse.current != null)
        {
            Vector2 mousePosition = Mouse.current.position.ReadValue();
        }
        if (Keyboard.current != null && Keyboard.current.anyKey.wasPressedThisFrame)
        {
            Debug.Log("A key was pressed");
        }
        if (Gamepad.current != null && Gamepad.current.aButton.wasPressedThisFrame)
        {
            Debug.Log("A button was pressed");
        }
    }
}
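The new Input System also exposes touch in the same style. A minimal sketch, assuming the Input System package is installed and the device has a touchscreen:
using UnityEngine;
using UnityEngine.InputSystem;

public class ReportTouchPosition : MonoBehaviour
{
    void Update()
    {
        // Touchscreen.current is null on devices without a touchscreen.
        if (Touchscreen.current != null &&
            Touchscreen.current.primaryTouch.press.wasPressedThisFrame)
        {
            Vector2 touchPosition = Touchscreen.current.primaryTouch.position.ReadValue();
            Debug.Log("Touch began at " + touchPosition);
        }
    }
}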
Hope it helped
So, I've been trying to learn Unity these past couple of weeks. As a starter project I decided to try and replicate the mobile game Pop the Lock. Right now, I'm having some problems with my keyboard inputs that I just can't figure out how to solve.
This is how my game screen looks: GameScreen. The red bar is the Player and the yellow circle is the Target. As the game progresses, the Player rotates around the ring towards the Target.
Basically, if the Player and the Target are touching AND the player presses the space key at the exact same time, the player is supposed to gain a point. The Target is also supposed to be destroyed, and a new Target is supposed to spawn at a random position on the game screen. At the moment, this system works ... most of the time. About 80% of the time my code operates as it should, but around 20% of the time it doesn't register when the player presses the space key as the two collide. Here's my code:
public class Target: MonoBehaviour {
public GameObject target;
void Update () {
if (Input.GetKeyDown("space")) {
Debug.Log("SPACE PRESSED!!");
}
}
private void OnTriggerEnter2D (Collider2D collision) {
Debug.Log("Collision!");
}
private void OnTriggerStay2D(Collider2D other) {
// This is the part that sometimes isn't registering:
if (Input.GetKeyDown("space")) {
Debug.Log("HIT!!");
Score.score++;
// Code to spawn new Target on random place in the ring:
// Seems to be working as intended:
float distance = 2.034822f;
float x = Random.Range(-2f, 2f);
float y = Mathf.Pow(distance,2) - Mathf.Pow(x,2);
y = Mathf.Sqrt(y);
float[] options = {y, -y};
int randomIndex = Random.Range(0, 2);
y = options[randomIndex];
Vector3 vector = new Vector3(x, y, 0);
GameObject newTarget = Instantiate(target, vector, Quaternion.identity);
Destroy(gameObject);
}
}
}
As you can see, I have Log statements that print something every time the Player and the Target are touching, every time the space key is pressed, and every time the space key is pressed while they are touching. This is what the console looks like when everything is working: Image One. This is what the console looks like when my code isn't working: Image Two.
So even when it isn't working, the collision and the key press are still registered at the exact same time. But for some reason the hit itself isn't registered (so the if condition isn't passed). Because of this I'm quite confident that it's not just input delay or me pressing the key at the wrong time. As I mentioned above, this only happens about 20% of the time, which makes it even more confusing to me. The Target has a trigger Collider2D and it also has a dynamic Rigidbody2D with gravity scale set to 0 (as I was told it should). Any help would be greatly appreciated.
(How my collider and rigidbody look: Image)
Something you can do is set a flag that becomes true while you are pressing the key in the update loop (OnTriggerStay2D runs on the physics timestep, which does not always line up one-to-one with rendered frames, so polling GetKeyDown there can miss presses). The update loop then becomes something like:
private bool isSpacePressed = false;
void Update() {
    isSpacePressed = false;
    if (Input.GetKeyDown(KeyCode.Space)) {
        isSpacePressed = true;
    }
}
So every loop the flag will be set to false unless you are pressing the space bar, and OnTriggerStay2D will become something like:
void OnTriggerStay2D(Collider2D other) {
    if (isSpacePressed) {
        // .. do magic ..
    }
}
Note that I replaced Input.GetKeyDown("space") with Input.GetKeyDown(KeyCode.Space); I recommend using the latter to avoid typing errors.
For my bachelor thesis I'm augmenting a physical paper map. Therefore, as written in the title, I'm using Unity in combination with Vuforia for image target detection and further functions.
So far so good.
Now the problem:
I'm using cube elements that are augmented beside the map as interaction elements to filter the projected content. Those cubes have box colliders on them. I attached the following "ButtonController" script to my AR camera, which should handle the raycast hits on those cubes and trigger further functions.
using UnityEngine;
using UnityEngine.Events;
using UnityEngine.EventSystems;
public class ButtonController : MonoBehaviour
{
public AudioClip[] audioClips;
public AudioSource myAudioSource;
private string btnName;
private GameObject[] stations;
private void Start()
{
myAudioSource = GetComponent<AudioSource>();
}
private void Update()
{
if (Input.GetMouseButtonDown(0))
{
//Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
Vector3 tapPositionFar = new Vector3(Input.mousePosition.x, Input.mousePosition.y, Camera.main.farClipPlane);
Vector3 tapPositionNear = new Vector3(Input.mousePosition.x, Input.mousePosition.y, Camera.main.nearClipPlane);
Vector3 tapPosF = Camera.main.ScreenToWorldPoint(tapPositionFar);
Vector3 tapPosN = Camera.main.ScreenToWorldPoint(tapPositionNear);
int layerMask = LayerMask.GetMask("Button", "Pin");
RaycastHit hit;
if (Physics.Raycast(tapPosN, tapPosF - tapPosN, out hit, Mathf.Infinity, layerMask))
{
btnName = hit.transform.name;
Debug.Log("NAME OF HIT TARGET: " + hit.transform.name);
myAudioSource.clip = audioClips[0];
myAudioSource.Play();
switch (btnName)
{
case "buttonRoute1":
Debug.Log("In: CASE 1");
playAnimation(btnName);
stations = GameObject.FindGameObjectsWithTag("Route1");
Debug.Log(stations);
LineRenderer lineRenderer = GetComponent<LineRenderer>();
foreach (GameObject station in stations)
{
MeshRenderer mRenderer = station.GetComponent<MeshRenderer>();
if (mRenderer.enabled == true)
{
mRenderer.enabled = false;
}
else
{
mRenderer.enabled = true;
}
}
return;
case "buttonRoute2":
Debug.Log("In: CASE 2");
playAnimation(btnName);
return;
case "buttonRoute3":
Debug.Log("In: CASE 3");
playAnimation(btnName);
return;
case "buttonRoute4":
Debug.Log("In: CASE 4");
playAnimation(btnName);
return;
case "buttonRoute5":
Debug.Log("In: CASE 5");
playAnimation(btnName);
return;
case "buttonRoute6":
Debug.Log("In: CASE 6");
playAnimation(btnName);
return;
}
}
else
{
//Debug.DrawRay(ray.origin, ray.direction * 1000, Color.white);
Debug.Log("NOTHING HAPPENED WITH MOUSE");
}
}
}
void playAnimation(string btnName)
{
GameObject currentGameObject = GameObject.Find(btnName);
Animation anim = currentGameObject.GetComponent<Animation>();
anim.Play();
}
}
These cubes are set to specific XZ coordinates in the Unity scene and do not get moved programmatically at runtime by another script.
I also augment pins, which are placed at the 0/0/0 of an image target and get repositioned at runtime to new XZ coordinates calculated from their lat/long coordinates. Their raycast hits are detected by the above script as well.
When I run my application in the editor, everything works perfectly. Each of the elements, the cubes and the pins, gets hit by the raycast exactly as it should.
So far so good.
When I build the Android version and install it on my Xiaomi Android phone, the raycasts don't hit as they should. I don't get any hit at all when I touch the cubes at their original position beside the map. BUT I do get hits in the blue marked area seen in the picture below.
My Unity scene showing the cube buttons
It looks like the box colliders of the cubes on the side all get moved onto the 0/0/0 position of my image target at runtime, although the models keep their original positions.
The pins don't get any hits at all, although they have active mesh colliders on them and do get hit in the editor.
I'm extremely desperate now, since I've already tried just about every piece of advice I found in dozens of threads.
I tried setting up a new AR camera element, moving the script to another object, changing the hierarchy, changing colliders, resetting colliders, different raycast scripts from mouse click to touch, using a different device, ... and so on.
I would appreciate it SO MUCH if any of you has a hint as to what the problem could be.
If more information would be needed just let me know!
Thanks so far.
Greetings
EDIT:
@Philipp Lenssen and others
As I already said, I think the colliders move at runtime. I debugged the colliders of buttonRoute1 and buttonRoute2 while hitting the blue marked zone from screenshot 1.
I get hits, and the colliders' positions are completely different.
The green and yellow marked cubes visualize the box colliders of the Route 1 and Route 2 button cubes beside the map
They are not even at the same wrong position. One is above zero on the Y axis and one underneath. Their X and Z coordinates are completely weird. They should keep the positions of the green Route 2 and yellow Route 1 button 3D elements shown beside the map!
I have NO idea why this happens....
EDIT:
I've rebuilt the entire project as a new, clean one. I also updated my Android SDK to the latest version. Now the app works BUG-FREE!
I am currently trying to detect whether a touch input hits a 3D object in my ARCore scene. I simply edited the HelloAR sample script to use a custom prefab instead of the Andy object, and after I have spawned it, I want to be able to touch it. The prefab consists of six 3D platforms, each with a different name and a box collider. The following code gives very weird results.
if (Frame.Raycast(touch.position.x, touch.position.y, raycastFilter, out hit))
{
// Use hit pose and camera pose to check if hittest is from the
// back of the plane, if it is, no need to create the anchor.
if ((hit.Trackable is DetectedPlane) &&
Vector3.Dot(FirstPersonCamera.transform.position - hit.Pose.position,
hit.Pose.rotation * Vector3.up) < 0)
{
Debug.Log("Hit at back of the current DetectedPlane");
}
else
{
if (!IsPointerOverUIObject())
{
if(!platsSpawned)
{
// Instantiate platforms at the hit pose.
var platforms = Instantiate(platformPrefab, new Vector3(hit.Pose.position.x, hit.Pose.position.y + offsetY, hit.Pose.position.z), hit.Pose.rotation);
// Create an anchor to allow ARCore to track the hitpoint as understanding of the physical
// world evolves.
var anchor = hit.Trackable.CreateAnchor(hit.Pose);
// Make platforms a child of the anchor.
platforms.transform.parent = anchor.transform;
platsSpawned = true;
}
else if (platsSpawned)
{
//Ray raycast = Camera.main.ScreenPointToRay(Input.GetTouch(0).position);
Ray raycast = FirstPersonCamera.ScreenPointToRay(Input.GetTouch(0).position);
RaycastHit raycastHit;
if (Physics.Raycast(raycast, out raycastHit))
{
try
{
debugLabel.text = raycastHit.collider.name;
var obj = GameObject.Find(raycastHit.collider.name);
obj.GetComponent<SomeScript>().DoeSmthng();
}
catch (Exception ex)
{
Debug.Log(ex.Message);
}
}
}
}
}
}
Running this code, it will sometimes not hit any platform (even though you are clearly touching them), sometimes it will detect the touch correctly and sometimes it will just hit any of the other platforms in the prefab (even when only one is visible at the moment).
I feel like there is something weird going on with the Raycast. I tried casting with "Camera.main" and "FirstPersonCamera" but the results are pretty much the same.
Any ideas on why this is happening? Does someone have a proper example, or can someone help me correct my code?
EDIT: I found out that the raycasts work as long as the platform objects are not children of the anchor. I think that maybe the objects get turned into "Trackables", but I am not sure how I would work with those.
I'm working on a TD game, and there's an option to change volume during gameplay. It worked fine when I first wrote it, but now it doesn't, and I'm pretty sure no one changed the relevant parts.
There are several components that I'm using.
First, there's a small script attached to the turrets that sets the AudioSource volume to the value set in PlayerPrefs. It's pretty straightforward.
void Start () {
SetVolume();
}
public void SetVolume() {
if (gameObject.GetComponent<AudioSource> () == null) {
gameObject.AddComponent<AudioSource> ();
}
gameObject.GetComponent<AudioSource>().volume = PlayerPrefs.GetFloat ("SFXVolume");
}
Then there's the script that's called from the volume settings. When the user slides the volume slider, the value is saved in PlayerPrefs, then I search for objects with the Turret tag and change their volume in a loop.
void Start () {
musSlider = GameObject.Find ("MusicSlider");
sfxSlider = GameObject.Find ("SFXSlider");
if (!(PlayerPrefs.HasKey ("MusicVolume"))) {
PlayerPrefs.SetFloat ("MusicVolume", 1.0f);
PlayerPrefs.SetFloat ("SFXVolume", 1.0f);
}
musSlider.GetComponent <Slider> ().value = PlayerPrefs.GetFloat ("MusicVolume");
sfxSlider.GetComponent <Slider> ().value = PlayerPrefs.GetFloat ("SFXVolume");
}
public void SetSFXVolume() {
PlayerPrefs.SetFloat ("SFXVolume", sfxSlider.GetComponent <Slider> ().normalizedValue);
if (!(String.Equals (SceneManager.GetActiveScene ().name, "VietrixMainMenuScene"))) {
GameObject[] turrets = GameObject.FindGameObjectsWithTag("Turret");
for (int i = 0; i < turrets.Length; i++) {
if (String.Equals (turrets [i].name, "Turret")) {
Debug.Log ("Turret name: " + turrets [i].transform.parent.gameObject.name);
} else {
Debug.Log ("Turret name: " + turrets [i].name);
}
turrets[i].GetComponent<AudioSource> ().volume = PlayerPrefs.GetFloat ("SFXVolume");
}
}
}
And here is where things go wrong. I currently have 6 turrets in the scene, and only 4 of them show up in the log and have their volume changed. All of the tags are in place; I've just checked for the umpteenth time. Of the 4 that show up, 3 have tags on the turrets themselves, and 1 has tags on both the turret and its parent object. Of the 2 that don't show up, 1 has the tag on the turret itself, and the other has tags on both the turret and the parent object.
What could have happened? I'm not the only one working on this game, but the others swear they haven't touched the tags or the volume scripts.
Instead of manually changing the volume of each audio source, it is probably better to look into the AudioMixer which already does what you are trying to do.
Here's a tutorial on it.
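For reference, a minimal sketch of the AudioMixer approach, assuming you route your SFX sources through a mixer group and expose its volume as a parameter named "SFXVolume" (the parameter name is whatever you choose when exposing it):
using UnityEngine;
using UnityEngine.Audio;

public class MixerVolume : MonoBehaviour
{
    public AudioMixer mixer; // assign your mixer asset in the Inspector

    // Hook this up to the slider's OnValueChanged. Keep the slider's
    // minimum slightly above 0 (e.g. 0.0001) because of the Log10 below.
    public void SetSFXVolume(float sliderValue)
    {
        // Mixer volumes are in decibels, so convert the linear 0..1 value.
        mixer.SetFloat("SFXVolume", Mathf.Log10(sliderValue) * 20f);
    }
}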
Using the names of GameObjects to try and find the right one isn't a great idea, since names can change (e.g. when they are instantiated, they will be Turret (Clone)).
If you're really set on your solution, why not just do this in your VolumeSettings script in SetSFXVolume (assuming that the small script on your turrets is named AudioSourceBuddy):
public void SetSFXVolume() {
PlayerPrefs.SetFloat ("SFXVolume", sfxSlider.GetComponent <Slider> ().normalizedValue);
if (!(String.Equals (SceneManager.GetActiveScene ().name, "VietrixMainMenuScene"))) {
foreach(AudioSourceBuddy buddy in GameObject.FindObjectsOfType<AudioSourceBuddy>()) {
buddy.SetVolume();
}
}
}
That way you don't have to worry about looping through tags, checking names, etc. You can just find all of the sources directly.