Unity 3D interaction with UI Image - c#

I am creating a 3D game. In the UI there is an image, and I want to print different things when the user puts the cursor over different parts of it. In a 2D game I would add child objects to the image, give them polygon colliders, and use the OnMouseOver() method, but as I understand it this doesn't work on UI elements. I also tried the OnPointerEnter() method; it works, but I can't split the image into different parts with it. I tried splitting the image into small pieces with an external tool and placing them side by side to form the whole image, but Unity treats every UI image as rectangular (I am doing this for the Girl with a Pearl Earring painting, so the shapes are irregular). How can I do this?

While this is not the best solution, you can read the current mouse position, check first whether it's "inside" the picture, and then check the different parts.
(For example, if the image spans [100,100] to [300,300] and one part spans [100,100] to [150,150], you can check whether the mouse lies between those coordinates.)
So you would get the position and compare it against the boundaries:
Vector2 mousePosition = new Vector2(Input.mousePosition.x, Input.mousePosition.y);
float x1 = /* left image boundary */, x2 = /* right image boundary */;
float y1 = /* bottom image boundary */, y2 = /* top image boundary */;
if (mousePosition.x >= x1 && mousePosition.x <= x2 && mousePosition.y >= y1 && mousePosition.y <= y2)
{
    // Check which part of the image the cursor is in, and decide then
}
Or, you can raycast and check the tag, and if the tag matches the image's, check for the specific part:
Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
RaycastHit hit;
if (Physics.Raycast(ray, out hit))
{
    if (hit.collider.gameObject.CompareTag(YOUR_TAG))
    {
        // Check for inner boundaries.
    }
}
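One way to organize the "inner boundaries" check is a lookup of named regions, each with its own screen-space rectangle. A minimal sketch; the region names and Rect values are placeholder assumptions, not taken from the question:

```csharp
using System.Collections.Generic;
using UnityEngine;

public class PaintingRegions : MonoBehaviour
{
    // Hypothetical screen-space regions; replace with your image's actual areas.
    readonly Dictionary<string, Rect> regions = new Dictionary<string, Rect>
    {
        { "earring", new Rect(100, 100, 50, 50) },
        { "turban",  new Rect(150, 200, 120, 80) },
    };

    void Update()
    {
        Vector2 mousePosition = Input.mousePosition;
        foreach (var region in regions)
        {
            if (region.Value.Contains(mousePosition))
            {
                Debug.Log("Cursor over: " + region.Key);
            }
        }
    }
}
```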

Unity - drag to resize / scale plane based on handle position?

Alright, I can't find any example of this already done, and my attempt is yielding odd results. I need to drag-to-resize a flattened cube (like a plane, but it must have thickness, so it is a cube) using a handle in the corner, like a window on your desktop.
So far I've created my handle plane and gotten a reference to it via a script attached to my cube plane. The cube plane and the handle are children of an empty object to which the scale is applied, so that the cube plane scales from left to right as desired.
That works; however, my attempt at using the delta of the scale handle's position to scale the parent empty scales either way too much or in odd directions:
void Awake()
{
    scaleHandle = GameObject.FindGameObjectWithTag("ScaleHandle");
    scaleHandleInitialPos = scaleHandle.transform.localPosition;
}
void Update()
{
    width += -1 * (scaleHandleInitialPos.x - scaleHandle.transform.localPosition.x);
    height += -1 * (scaleHandleInitialPos.y - scaleHandle.transform.localPosition.y);
    transform.parent.localScale = new Vector3(width, height, thickness);
}
What is wrong here?
Hierarchy:
Transform of the child:
The result is similar when updating initialPos every Update(), or when doing width = scaleHandle.transform.localPosition.x / scaleHandleInitialPos.x; (the white square is the scaleHandle).
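A possible cause is that the code adds the full offset from the initial handle position to width and height on every frame, so the scale keeps growing each Update() even while the handle is stationary. A sketch of one fix, computing the scale from the initial size plus the current offset instead of accumulating (field names follow the question's code; initialWidth and initialHeight are assumed additions):

```csharp
using UnityEngine;

public class ScaleWithHandle : MonoBehaviour
{
    GameObject scaleHandle;
    Vector3 scaleHandleInitialPos;
    float initialWidth;
    float initialHeight;
    float thickness = 0.1f; // assumed constant

    void Awake()
    {
        scaleHandle = GameObject.FindGameObjectWithTag("ScaleHandle");
        scaleHandleInitialPos = scaleHandle.transform.localPosition;
        initialWidth = transform.parent.localScale.x;
        initialHeight = transform.parent.localScale.y;
    }

    void Update()
    {
        // Total offset from where the handle started, not a per-frame accumulation.
        float dx = scaleHandle.transform.localPosition.x - scaleHandleInitialPos.x;
        float dy = scaleHandle.transform.localPosition.y - scaleHandleInitialPos.y;
        transform.parent.localScale = new Vector3(initialWidth + dx, initialHeight + dy, thickness);
    }
}
```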

Touch and Canvas element location do not match

I'm trying to create a game like WordCookie. I want to use a LineRenderer, so I cannot use the Screen Space - Overlay canvas render mode. When using either Screen Space - Camera or World Space, my touch position doesn't match the position I get from my RectTransforms.
I access my buttons' transform position by logging:
foreach (RectTransform child in buttons.transform)
{
Debug.Log("assigning" + child.transform);
}
For the touch coordinates, I simply log touch.position.
To clarify: I want to trigger my LineRenderer when the distance between the position vectors is smaller than a certain float. However, whenever I tap on my button to test this, the button logs at (1.2, -2.6) and my touch at (212.2, 250.4).
What could be causing this?
touch.position returns a value in pixel (screen) coordinates.
In order to convert it to world coordinates you need to use Camera.ScreenToWorldPoint.
It should be something like this:
Touch touch = Input.GetTouch(0);
float buttonPositionZ = button.transform.position.z;
Vector3 worldPosition = camera.ScreenToWorldPoint(new Vector3(touch.position.x, touch.position.y, buttonPositionZ));
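Combining that conversion with the distance check described in the question (button, camera, and maxDistance are assumed references, not names from the original code):

```csharp
Touch touch = Input.GetTouch(0);
// Convert the screen-space touch to world space at the button's depth.
Vector3 worldPosition = camera.ScreenToWorldPoint(
    new Vector3(touch.position.x, touch.position.y, button.transform.position.z));
// Trigger the LineRenderer only when the touch is close enough to the button.
if (Vector3.Distance(worldPosition, button.transform.position) < maxDistance)
{
    // start or extend the line here
}
```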

How do I convert point to local coordinates?

When a user taps a button, I want the tap to be ignored if a transparent pixel is hit, and the button beneath it to receive the tap instead.
I can do this sort of behaviour in Objective-C within a few minutes; however, doing it in Unity in C# has proven impossible, as there is no obvious way to convert a point from the screen to a local point. Either the center, the bottom left, or the top left corner should be the origin point (I don't know which, because Unity continuously changes where the origin is depending on the phase of the moon).
I've tried looking at IsRaycastLocationValid from ICanvasRaycastFilter, and inside it I've used both RectTransformUtility.ScreenPointToLocalPointInRectangle and RectTransformUtility.ScreenPointToWorldPointInRectangle, and the result is always the same. The RectTransform I pass in is the one belonging to the button.
E.g.:
public bool IsRaycastLocationValid(Vector2 screenPosition, Camera raycastEventCamera) //uGUI callback
{
Vector2 localPoint;
RectTransformUtility.ScreenPointToLocalPointInRectangle(rectTransform, screenPosition, null, out localPoint);
Vector2 pivot = rectTransform.pivot - new Vector2(0.5f, 0.5f);
Vector2 pivotScaled = Vector2.Scale(rectTransform.rect.size, pivot);
Vector2 realPoint = localPoint + pivotScaled;
Debug.Log(screenPosition + " " + localPoint);
return false;
}
This comes from someone else's unanswered attempt here:
Unity 3d 4.6 Raycast ignore alpha on sprite
I've also found this link:
http://forum.unity3d.com/threads/alpha-area-on-round-gui-button-return-click-event.13608/
where someone tries to determine whether the mousePosition is within the image bounds. The obvious problem is that the image has to have a corner at (100,100). Magic numbers are bad, and figuring out how to obtain those coordinates is next to impossible.
The following properties on a RectTransform are always the same no matter where you place the button:
anchoredPosition
anchoredPosition3D
anchorMax
anchorMin
offsetMax
offsetMin
pivot
rect
sizeDelta
As you can see, that is every property a RectTransform has. Therefore it is impossible to tell where a button is, and impossible to tell where a click is in the button's coordinate space.
Can someone please tell me how to do what I need to do?
This should do the trick:
bool isScreenPointInsideRectTransform( Vector2 screenPoint, RectTransform transform, Camera canvasCamera )
{
    Vector2 localPoint;
    RectTransformUtility.ScreenPointToLocalPointInRectangle( transform, screenPoint, canvasCamera, out localPoint );
    return transform.rect.Contains( localPoint );
}
That's exactly why you don't find the location of the point inside the button itself: you compare the location of the point inside the button's parent rect with the button's localPosition.
Alternatively, you can translate the pixel point to a screen position and then compare it with the mouse position.
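For the original goal of ignoring taps on transparent pixels, the local point can be converted into normalized texture coordinates and the sprite's texture sampled inside IsRaycastLocationValid. A rough sketch, assuming the texture is readable (Read/Write enabled in its import settings), the sprite fills the whole rect, and alphaThreshold is an arbitrary cutoff:

```csharp
using UnityEngine;
using UnityEngine.UI;

public class AlphaRaycastFilter : MonoBehaviour, ICanvasRaycastFilter
{
    [SerializeField] float alphaThreshold = 0.1f; // taps below this alpha fall through

    public bool IsRaycastLocationValid(Vector2 screenPosition, Camera eventCamera)
    {
        var rectTransform = (RectTransform)transform;
        Vector2 localPoint;
        if (!RectTransformUtility.ScreenPointToLocalPointInRectangle(
                rectTransform, screenPosition, eventCamera, out localPoint))
            return false;

        // Normalize the local point to [0,1] across the rect.
        Rect rect = rectTransform.rect;
        float u = (localPoint.x - rect.x) / rect.width;
        float v = (localPoint.y - rect.y) / rect.height;
        if (u < 0f || u > 1f || v < 0f || v > 1f)
            return false;

        // Sample the sprite's texture at that point; requires a readable texture.
        Texture2D texture = GetComponent<Image>().sprite.texture;
        return texture.GetPixelBilinear(u, v).a >= alphaThreshold;
    }
}
```

Later Unity versions also expose Image.alphaHitTestMinimumThreshold, which performs a similar per-pixel alpha check without a custom filter.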

Discontinuity in RaycastHit.textureCoord values (getting coordinates on texture right)

Hi, I'm trying to get a particular coordinate on a texture (under the mouse cursor). So on a mouse event I perform:
Ray outRay = Camera.main.ScreenPointToRay(Input.mousePosition);
RaycastHit clickRayHit = new RaycastHit();
if (!Physics.Raycast(outRay, out clickRayHit))
{
return;
}
Vector2 textureCoordinate = clickRayHit.textureCoord;
Texture2D objectTexture = gameObject.GetComponent<Renderer>().material.mainTexture as Texture2D;
int xCord = (int)(objectTexture.width * textureCoordinate.x);
int yCord = (int)(objectTexture.height * textureCoordinate.y);
But the problem is that the coordinate I'm getting is not precisely under the cursor but "somewhat near it". The coordinates are not consistently shifted in one direction, yet the shift is not random either. They are shifted differently at different points of the texture, but:
they remain somewhere in the area of the real cursor,
and the coordinates are shifted in the same way whenever the cursor is above the same point.
Here is part of the coordinates log: http://pastebin.ca/3029357
If I haven't described the problem well enough, I can record a short screencast.
The GameObject is a Plane.
If it is relevant, the mouse event is generated by a Windows mouse hook (an application-specific thing).
What am I doing wrong?
UPD: I've decided to record a screencast: https://youtu.be/LC71dAr_tCM?t=42. Here you can see a Paint window image through my application. In the bottom left corner you can see that Paint displays the coordinates of the mouse (I'm getting these coordinates the way I described earlier, as the location of a point on the texture). As I move the mouse cursor you can see how the coordinates change.
UPD2:
I just want to emphasize one more time that this shift is not constant or linear. There can be "jumps" around the mouse coordinates (but not only jumps). The video above explains it better.
So it was a scaling problem after all: one of the scale values was negative, which was causing this behaviour.

Implementing zoom using MonoGame and two-fingers gesture

I want to implement a zoom feature using the two-finger pinch in/out gesture that is common in games such as Angry Birds. Right now I'm using a zoom slider, and it doesn't feel as good as the simple gesture. I've tried looking at the gesture implementation in MonoGame but haven't figured out what can actually help me achieve the described behaviour.
Any help will be appreciated, thanks!
Short answer: you need to use the TouchPanel gesture functionality to detect the Pinch gesture, then process the resultant gestures.
The longer answer...
You will get multiple GestureType.Pinch gesture events per user gesture, followed by a GestureType.PinchComplete when the user releases one or both fingers. Each Pinch event will have two pairs of vectors - a current position and a position change for each touch point. To calculate the actual change for the pinch you need to back-calculate the prior positions of each touch point, get the distance between the touch points at prior and current states, then subtract to get the total change. Compare this to the distance of the original pinch touch points (the original positions of the touch points from the first pinch event) to get a total distance difference.
First, make sure you initialize the TouchPanel.EnabledGestures property to include GestureType.Pinch and optionally GestureType.PinchComplete depending on whether you want to capture the end of the user's pinch gesture.
Next, use something similar to this (called from your Game class's Update method) to process the events
bool _pinching = false;
float _pinchInitialDistance;

private void HandleTouchInput()
{
    while (TouchPanel.IsGestureAvailable)
    {
        GestureSample gesture = TouchPanel.GetGesture();
        if (gesture.GestureType == GestureType.Pinch)
        {
            // current positions
            Vector2 a = gesture.Position;
            Vector2 b = gesture.Position2;
            float dist = Vector2.Distance(a, b);
            // prior positions
            Vector2 aOld = gesture.Position - gesture.Delta;
            Vector2 bOld = gesture.Position2 - gesture.Delta2;
            float distOld = Vector2.Distance(aOld, bOld);
            if (!_pinching)
            {
                // start of pinch, record original distance
                _pinching = true;
                _pinchInitialDistance = distOld;
            }
            // work out zoom amount based on pinch distance...
            float scale = (distOld - dist) * 0.05f;
            ZoomBy(scale);
        }
        else if (gesture.GestureType == GestureType.PinchComplete)
        {
            // end of pinch
            _pinching = false;
        }
    }
}
The fun part is working out the zoom amounts. There are two basic options:
As shown above, use a scaling factor to alter zoom based on the raw change in distance represented by the current Pinch event. This is fairly simple and probably does what you need it to do. In this case you can probably drop the _pinching and _pinchInitialDistance fields and related code.
Track the distance between the original touch points and set zoom based on current distance as a percentage of initial distance (float zoom = dist / _pinchInitialDistance; ZoomTo(zoom);)
Which one you choose depends on how you're handling zoom at the moment.
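Inside the Pinch branch above, option 2 might look like this (ZoomTo is a hypothetical method that applies an absolute zoom factor):

```csharp
// After dist and distOld have been computed for the current Pinch event:
if (!_pinching)
{
    // First Pinch event of this gesture: remember the initial finger spread.
    _pinching = true;
    _pinchInitialDistance = distOld;
}

// Absolute zoom: current spread as a fraction of the initial spread.
// 1.0 means unchanged; greater than 1 means the fingers moved apart (zoom in).
float zoom = dist / _pinchInitialDistance;
ZoomTo(zoom); // hypothetical: sets the camera's zoom level directly
```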
In either case, you might also want to record the central point between the touch points to use as the center of your zoom rather than pinning the zoom point to the center of the screen. Or if you want to get really silly with it, record the original touch points (aOld and bOld from the first Pinch event) and do translation, rotation and scaling operations to have those two points follow the current touch points.
