Can you get finger position from OnMouseDown and OnMouseUp? - c#

On a 2D layout, I have a fixed, button-like GameObject with a script that implements the OnMouseDown and OnMouseUp functions. If I touch that object, drag my finger away, and then release, can OnMouseUp report the screen position where that finger was released?
The catch: it's a multiplayer game, so there may be many other fingers on the screen!

You shouldn't use the OnMouseDown and OnMouseUp callback functions for this, because that would force you to read the touch position with Input.mousePosition on desktops and Input.touches on mobile devices.
This should be done with the OnPointerDown and OnPointerUp functions, which give you PointerEventData as a parameter. You can access the pointer position there with PointerEventData.position. It will work on both mobile and desktop devices without having to write different code for each platform. Note that you must implement the IPointerDownHandler and IPointerUpHandler interfaces in order for these functions to be called.
using UnityEngine;
using UnityEngine.EventSystems;

public class PointerReporter : MonoBehaviour, IPointerDownHandler, IPointerUpHandler
{
    public void OnPointerDown(PointerEventData eventData)
    {
        Debug.Log("Mouse Down: " + eventData.position);
    }

    public void OnPointerUp(PointerEventData eventData)
    {
        Debug.Log("Mouse Up: " + eventData.position);
    }
}
If you run into issues with it, you can always visit the troubleshooting section on this post to see why.

Short answer: yes. However, this becomes a little muddled with multi-touch. Basically, you can get the mouse or finger position at any time; you can track it in real time and display the mouse position continuously quite easily, since the Input.mousePosition variable is easy to grab. The problem comes when you have more than one finger on screen: you need to track which finger has been lifted and which ones are still down. Still, if you set it up right it should work; it will just take more effort. My advice, though, if you're handling more than one touch, is to use the Standalone MultiTouch Input Module. It's free on the Asset Store (I've included a link for you), and it's pretty straightforward.
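A minimal sketch of tracking each finger independently by its fingerId, using Unity's legacy Input touch API (class and variable names here are illustrative, not from the original post):

```csharp
using System.Collections.Generic;
using UnityEngine;

public class MultiTouchTracker : MonoBehaviour
{
    // Maps each active finger's id to the screen position where it went down.
    private readonly Dictionary<int, Vector2> downPositions = new Dictionary<int, Vector2>();

    void Update()
    {
        foreach (Touch touch in Input.touches)
        {
            switch (touch.phase)
            {
                case TouchPhase.Began:
                    downPositions[touch.fingerId] = touch.position;
                    break;
                case TouchPhase.Ended:
                case TouchPhase.Canceled:
                    if (downPositions.TryGetValue(touch.fingerId, out Vector2 start))
                    {
                        Debug.Log("Finger " + touch.fingerId + " down at " + start +
                                  ", released at " + touch.position);
                        downPositions.Remove(touch.fingerId);
                    }
                    break;
            }
        }
    }
}
```

The fingerId stays stable for the lifetime of one touch, which is what lets you match a release back to the press even with many fingers on screen.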

Related

Bounding Player to path Unity

Let's say I have a path, which could be a line or a curve in Unity, and I want to allow my player to snap to that path and move along it. How would I achieve this? It would look like either a cover system or something similar to how ledges work in Sly Cooper.
If you want your player's movement to follow a path, there is a technique called waypoints, which uses the built-in navigation system in Unity. There are many articles teaching you how to build such a waypoint system manually, and you can also get a SimpleWayPoint module from the Asset Store.
But if your player is controlled by the user, meaning the user controls its movement (say, pressing 'W' moves forward and pressing 'S' moves backwards, while the leftwards and rightwards movement is constrained by the path you defined), unfortunately I haven't found any mature technique that realizes this constraint.
However, in a project I completed several months ago, I handled the coordinates manually with success. Since I had created a city whose streets strictly follow horizontal and vertical grid lines, I could easily control the target GameObject's X and Y coordinates, either by using the built-in freeze-position constraints or by resetting the axis values over and over again in the Update() method. So maybe you could use the Update() method to constrain the target's coordinates by dynamically recalculating them from the functions that represent the path, but I can imagine how complex that would be.
You could try using a NavMeshAgent, which is enabled as soon as the player wants to use this "auto movement". In your code you can set the destination, and in your scene you can bake a NavMesh, which can be whatever shape you like.
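A rough sketch of that NavMeshAgent approach (it assumes a NavMesh has already been baked for the scene; the target field and method names are placeholders):

```csharp
using UnityEngine;
using UnityEngine.AI;

public class AutoMove : MonoBehaviour
{
    public Transform target;      // where the "auto movement" should go
    private NavMeshAgent agent;

    void Start()
    {
        agent = GetComponent<NavMeshAgent>();
        agent.enabled = false;    // keep it off until the player opts in
    }

    // Call this when the player chooses auto movement.
    public void BeginAutoMove()
    {
        agent.enabled = true;
        agent.SetDestination(target.position);
    }
}
```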
You can do it by:
Mark the position beyond which you do NOT want your player to move forward/cross the line.
In your script, store that position.
LOGIC
If the player's position passes the marked position, stop the movement.
SYNTAX
private void Update()
{
    if (transform.position.x < -10)
    {
        // Clamp the player at x = -10. Use Vector3 or Vector2 according to your game.
        transform.position = new Vector3(-10, transform.position.y, transform.position.z);
    }
}

Unity ARCore simple object movement across network

I'm currently developing an Android application using Google's ARCore and Cloud Anchors, and I'm having a problem getting objects to change location across the network on a button click. The problem has to be somewhere around the actual button, because I know the button runs the function, but it doesn't do what I want it to do. If I call the function from the player script it works fine (but I need this to work on a button click!), and I've been stuck on it for days now. Some enlightenment here would be very much appreciated. If anyone's wondering, I'm simply trying to edit (as I'm learning) the Cloud Anchors example code, so the original source code belongs to Google.
[Code I'm stuck with][1]
What I'm trying to achieve is a simple transform.Translate on an object and make this move visible to clients.
Things to note:
Button has network identity
Different functions to make any kind of movement/rotation
Player prefab has network identity
object prefabs in question have network transform component
My code is presented below:
[Command]
public void CmdSpawnStar(Vector3 position, Quaternion rotation)
{
    // Stops further object placement until touchCounter is reset to 0.
    if (touchCounter == 0)
    {
        // Place the object at the hit pose.
        PlaceableModel = Instantiate(StarPrefab, position, rotation);
        // Compensate for the hit pose rotation facing away from the raycast (i.e. the camera).
        PlaceableModel.transform.Rotate(0, k_ModelRotation, 0, Space.Self);
        NetworkServer.Spawn(PlaceableModel);
        touchCounter++;
    }
}

[Command]
public void CmdFlyUp()
{
    _ShowAndroidToastMessage("UP button is pressed");
    PlaceableModel.transform.Translate(0, 0.1f, 0);
}

Removing 2D objects on touch using Raycasting?

I've been trying to use raycasting for the better part of a day now to remove 2D objects. I know how to use the OnMouseDown method to effectively do the same thing, and I've been using it so far. But I've read that raycasting is much more efficient than the OnMouseDown method, since OnMouseDown was designed specifically for mouse clicks. Looking over tutorials as well as reading the forums, I've seen people use various raycasting techniques, classes, and methods available in the Unity libraries, but these are mostly used for 3D objects. Since I'm designing a 2D game, I want to find out how to do it for 2D objects. I've tried several things to get it to work, but nothing does:
I've tried using RaycastHit2D and Physics2D.Raycast, and nothing seems to work.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class TouchTest : MonoBehaviour
{
    //public Vector2 direction;

    void Update()
    {
        // Cast a ray in the direction specified in the inspector.
        RaycastHit2D hit = Physics2D.Raycast(this.gameObject.transform.position,
                                             Input.GetTouch(0).position);

        // If something was hit.
        if (hit.collider != null)
        {
            // Display that the ray hit a collider.
            Debug.Log("We Hit something");
        }
    }
}
The result should be that it outputs "We hit something" on the console when I touch an object in Unity Remote, but it doesn't do anything except say that my index for Input.GetTouch(0).position is out of bounds. Other code that triggers this same error still manages to do what I want, but this code doesn't work and keeps saying the index is out of bounds.
The error you are receiving is because when the function is being called, the mouse is not clicked.
You must do this in the OnMouseDown method or put it in an if statement that only allows it to run if the mouse is actually clicked.
A good tutorial on this can be found here.
The best way (if you are only using 2D) is to check whether the mouse click is inside the shape when it is clicked:
Check when the mouse is clicked and get its position.
Get the rectangle of the body and compare it to the mouse position.
If the body's rectangle contains the mouse position, the mouse has clicked the body.
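One way to sketch that hit test in 2D without a directional raycast is to convert the touch to a world point and ask the 2D physics system which collider is under it. This is a sketch under a couple of assumptions: the main camera renders the scene, and the objects to remove have 2D colliders attached.

```csharp
using UnityEngine;

public class TouchRemover2D : MonoBehaviour
{
    void Update()
    {
        // Guard first: GetTouch(0) throws when there are no touches this frame,
        // which is the "index out of bounds" error from the question.
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        // Convert the screen-space touch to a world-space point.
        Vector2 worldPoint = Camera.main.ScreenToWorldPoint(Input.GetTouch(0).position);

        // Ask 2D physics which collider (if any) contains that point.
        Collider2D hit = Physics2D.OverlapPoint(worldPoint);
        if (hit != null)
        {
            Debug.Log("We hit " + hit.gameObject.name);
            Destroy(hit.gameObject);
        }
    }
}
```

Note the second argument of Physics2D.Raycast in the question's code is a direction, not a target point, which is another reason that version never hits anything useful.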

Mouse up vs. touch up in Unity

I have a bit of an issue. I can't seem to find a good way to distinguish whether Unity is firing a mouse up vs. a touch up event. By touch up I mean when the user releases their finger from the screen after touching something on a touch display.
The problem is this: I have an issue with game logic where the user both lets go of something they've selected with their mouse and SIMULTANEOUSLY touches / releases their touch to click a UI button (in other words, I'm trying to detect a mouse release only, or a touch release only; how can I do that?). I've tried using both Input.GetMouseButtonUp(0) as well as Input.GetMouseButton(0), but both change their state whether it's a touch up or a mouse button up event.
So how do you properly distinguish between the two? I even tried using the Touch.fingerId property which works well to track / distinguish ONLY touch events, but, the problem is then that I still cannot distinguish ONLY mouse up with Input.GetMouseButton(0) since it fires even in the event of a touch up.
Does anyone out there happen to know what the proper way would be to detect just a mouse release, or just a touch release, separately, in Unity?
Edit:
Someone didn't understand the problem at hand, so let's assume for a second you have a desktop device with touch support. Add this script:
void Update () {
if (Input.GetMouseButtonDown(0))
Debug.Log("Mouse button down.");
if (Input.GetMouseButtonUp(0))
Debug.Log("Mouse button up.");
}
If you click and HOLD with your mouse, while holding, you can touch and release with your finger, which will log Mouse button up., despite the fact that you did not release the mouse button. How can I distinguish between a mouse up vs. a touch release?
For desktops:
Mouse Down - Input.GetMouseButtonDown
Mouse Up - Input.GetMouseButtonUp
Example:
if (Input.GetMouseButtonDown(0))
{
Debug.Log("Mouse Pressed");
}
if (Input.GetMouseButtonUp(0))
{
Debug.Log("Mouse Lifted/Released");
}
For Mobile devices:
Touch Down - TouchPhase.Began
Touch Up - TouchPhase.Ended
Example:
if (Input.touchCount >= 1)
{
if (Input.touches[0].phase == TouchPhase.Began)
{
Debug.Log("Touch Pressed");
}
if (Input.touches[0].phase == TouchPhase.Ended)
{
Debug.Log("Touch Lifted/Released");
}
}
Now, if you are having issues of clicks going through UI Objects to hit other Objects then see this post for how to properly detect clicks on any type of GameObject.
I figured it out.
I was missing this call: Input.simulateMouseWithTouches = false;
Seems to work if you set this in a Start() method only (I could be wrong; it may be that you need to set it before the first Input accessor in Update, so test it out, I guess). Otherwise, Unity simulates mouse events from touch events. I can only speculate that they do this to make it easier for game developers to write once and play cross-platform.
Note: once it's set, it's set globally, so you'll need to unset it even in a different scene you load later on.
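Putting the fix together, a minimal sketch: disable the simulation once, then read mouse and touch state independently (the class name is illustrative):

```csharp
using UnityEngine;

public class InputSeparator : MonoBehaviour
{
    void Start()
    {
        // Stop Unity from synthesizing mouse events out of touches.
        Input.simulateMouseWithTouches = false;
    }

    void Update()
    {
        // Now fires only for a real mouse button release.
        if (Input.GetMouseButtonUp(0))
            Debug.Log("Mouse released");

        // Touch releases are reported only through the Touch API.
        for (int i = 0; i < Input.touchCount; i++)
        {
            if (Input.GetTouch(i).phase == TouchPhase.Ended)
                Debug.Log("Touch " + Input.GetTouch(i).fingerId + " released");
        }
    }
}
```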

Hide and seek game and raycasting

I am working on multiplayer collaboration in Unity3d using Smartfox Server2x.
But I wish to change it in to a hide and seek game.
When the hider (a third-person controller) presses the "seek me" button, the seeker tries to find the hider. But I don't know how to identify when the seeker sees the hider. Is it possible using raycasting? If yes, how? Please help me.
void Update()
{
    RaycastHit hit;
    if (Physics.Raycast(transform.position, transform.forward, out hit, 50))
    {
        if (hit.collider.gameObject.name == "Deccan Peninsula1")
        {
            Debug.Log("detect.................");
        }
        if (hit.collider.gameObject.tag == "Tree")
        {
            Debug.Log("detect.........cube........");
            //Destroy(gameObject);
        }
    }
}
From Unity Answers by duck:
There's a whole slew of ways to achieve this, depending on the precise
functionality you need.
You can get events when an object is visible within a certain camera,
and when it enters or leaves, using these functions:
OnWillRenderObject, Renderer.isVisible,
Renderer.OnBecameVisible, and OnBecameInvisible
Or you can calculate whether an object's bounding box falls within the
camera's view frustum, using these two functions:
GeometryUtility.CalculateFrustumPlanes
GeometryUtility.TestPlanesAABB
If you've already determined that the object is within the camera's
view frustum, you could cast a ray from the camera to the object in
question, to see whether there are any objects blocking its view.
Physics.Raycast
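The frustum test mentioned above can be sketched like this (it assumes the hider has a Renderer whose bounds you can test; field names are placeholders):

```csharp
using UnityEngine;

public class FrustumCheck : MonoBehaviour
{
    public Camera seekerCamera;       // the seeker's point of view
    public Renderer hiderRenderer;    // the hider's renderer, for its bounding box

    // True if the hider's bounding box intersects the seeker's view frustum.
    public bool HiderInFrustum()
    {
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(seekerCamera);
        return GeometryUtility.TestPlanesAABB(planes, hiderRenderer.bounds);
    }
}
```

Passing the frustum test only means the hider is in front of the camera; combining it with a raycast, as the quote suggests, rules out the case where a wall is in the way.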
You could do many things to find out if a seeker has found the hider. I am going to suggest how I would do it and I will try to make the idea/code as efficient as possible.
Each GameObject knows its position via its Transform component. You can check how close one object is to the other by comparing the distance between them against a threshold. The moment both objects are close to each other, you enter a new state. In this state you fire a raycast only when the seeker's direction/angle of view changes: your seeker looks around, and as he spins he fires the raycast. The main idea is not to fire too many raycasts all the time, which would hinder performance. Make sure your raycast ignores everything except who you are looking for.
If you get stuck you can ask a more specific question, probably regarding RayCast and how to make them ignore walls or not shoot through walls, or maybe you discover that solution on your own.
Good luck!
