Accurate Pinch Zoom - c#

I'm trying to figure out how to create an accurate pinch zoom for my camera in Unity3D/C#. It must be based on the physical points on the terrain. The image below illustrates the effect I want to achieve.
The Camera is a child of a null (empty GameObject) which scales between 0.1 and 1 to "zoom", so as not to mess with the perspective of the camera.
So what I've come up with so far is that each finger must use a raycast to get the A & B points as well as the current scale of the camera parent.
EG: A (10,0,2), B (14,0,4), S (0.8,0.8,0.8) >> A (10,0,2), B (14,0,4), S (0.3,0.3,0.3)
The positions of the fingers will change but the hit.point values should remain the same by changing the scale.
BONUS: As a bonus, it would be great to have the camera zoom into a point between the fingers, not just the center.
Thanks so much for any help or reference.
EDIT:
I've come up with this below so far but it's not accurate the way I want. It incorporates some of the ideas I had above and I think that the problem is that it shouldn't be /1000 but an equation including the current scale somehow.
if (Input.touchCount == 2) {
    if (!CamZoom) {
        CamZoom = true;
        var rayA = Camera.main.ScreenPointToRay (Input.GetTouch (0).position);
        var rayB = Camera.main.ScreenPointToRay (Input.GetTouch (1).position);
        int layerMask = (1 << 8);
        if (Physics.Raycast (rayA, out hit, 1500, layerMask)) {
            PrevA = new Vector3 (hit.point.x, 0, hit.point.z);
            Debug.Log ("PrevA: " + PrevA);
        }
        if (Physics.Raycast (rayB, out hit, 1500, layerMask)) {
            PrevB = new Vector3 (hit.point.x, 0, hit.point.z);
            Debug.Log ("PrevB: " + PrevB);
        }
        PrevDis = Vector3.Distance (PrevB, PrevA);
        Debug.Log ("PrevDis: " + PrevDis);
        PrevScaleV = new Vector3 (PrevScale, PrevScale, PrevScale);
        PrevScale = this.transform.localScale.x;
        Debug.Log ("PrevScale: " + PrevScale);
    }
    if (CamZoom) {
        var rayA = Camera.main.ScreenPointToRay (Input.GetTouch (0).position);
        var rayB = Camera.main.ScreenPointToRay (Input.GetTouch (1).position);
        int layerMask = (1 << 8);
        if (Physics.Raycast (rayA, out hit, 1500, layerMask)) {
            NewA = new Vector3 (hit.point.x, 0, hit.point.z);
        }
        if (Physics.Raycast (rayB, out hit, 1500, layerMask)) {
            NewB = new Vector3 (hit.point.x, 0, hit.point.z);
        }
        DeltaDis = PrevDis - (Vector3.Distance (NewB, NewA));
        Debug.Log ("Delta: " + DeltaDis);
        NewScale = PrevScale + (DeltaDis / 1000);
        Debug.Log ("NewScale: " + NewScale);
        NewScaleV = new Vector3 (NewScale, NewScale, NewScale);
        this.transform.localScale = Vector3.Lerp (PrevScaleV, NewScaleV, Time.deltaTime);
        PrevScaleV = NewScaleV;
        CamAngle ();
    }
}

Intro
I had to solve this same problem recently and started off with the same approach as you: treat it as though the user is interacting with the scene, work out where in the scene their fingers are and how they're moving, and then invert those actions to reflect them in our camera.
However, what we're really trying to achieve is much simpler. We simply want the user to feel like the area of the screen that they are pinching changes size with the same ratio as their pinch changes.
Aim
First let's summarise our goal and constraints:
Goal: When a user pinches, the pinched area should appear to scale to match the pinch.
Constraint: We do not want to change the scale of any objects.
Constraint: Our camera is a perspective camera.
Constraint: We do not want to change the field of view on the camera.
Constraint: Our solution should be resolution/device independent.
With all that in mind, and given that we know that with a perspective camera objects appear larger when they're closer and smaller when they're further, it seems that the only solution for scaling what the user sees is to move the camera in/out from the scene.
Solution
In order to make the scene look larger at our focal point, we need to position the camera so that a cross-section of the camera's frustum at the focal point is equivalently smaller.
Here's a diagram to better explain:
The top half of the image is the "illusion" we want to achieve of making the area the user expands twice as big on screen. The bottom half of the image is how we need to move the camera to position the frustum in a way that gives that impression.
The question then becomes: how far do I move the camera to achieve the desired cross-section?
For this we can take advantage of the relationship between the frustum's height h at a distance d from the camera when the camera's field of view angle in degrees is θ.
Since our field of view angle θ is constant per our agreed constraints, we can see that h and d are linearly proportional.
This is useful to know because it means that any multiplication/division of h is equally reflected in d. Meaning we can just apply our multipliers directly to the distance, no extra calculation to convert height to distance required!
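For reference, that relationship is the standard frustum-height formula; here's a small helper expressing it (a sketch using Unity's Mathf, where fieldOfViewDegrees is the camera's vertical field of view):
float FrustumHeightAtDistance(float distance, float fieldOfViewDegrees)
{
    // h = 2 * d * tan(θ / 2): doubling the distance doubles the height,
    // so height and distance stay linearly proportional for a fixed θ.
    return 2f * distance * Mathf.Tan(fieldOfViewDegrees * 0.5f * Mathf.Deg2Rad);
}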
Implementation
So we finally get to the code.
First, we take the user's desired size change as a multiple of the previous distance between their fingers:
Touch touch0 = Input.GetTouch(0);
Touch touch1 = Input.GetTouch(1);
Vector2 prevTouchPosition0 = touch0.position - touch0.deltaPosition;
Vector2 prevTouchPosition1 = touch1.position - touch1.deltaPosition;
float touchDistance = (touch1.position - touch0.position).magnitude;
float prevTouchDistance = (prevTouchPosition1 - prevTouchPosition0).magnitude;
float touchChangeMultiplier = touchDistance / prevTouchDistance;
Now that we know by how much the user wants to scale the area they're pinching, we can scale the camera's distance from its focal point by the opposite amount.
The focal point is the intersection of the camera's forward ray and the thing you're zooming in on. For the sake of a simple example, I'll just be using the origin as my focal point.
Vector3 focalPoint = Vector3.zero;
Vector3 direction = camera.transform.position - focalPoint;
float newDistance = direction.magnitude / touchChangeMultiplier;
camera.transform.position = newDistance * direction.normalized;
camera.transform.LookAt(focalPoint);
That's all there is to it.
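Put together, a minimal sketch of the whole thing in a single Update (assuming the script sits on the camera itself and focalPoint is whatever point you're zooming towards; untested):
private void Update()
{
    if (Input.touchCount != 2)
        return;

    Touch touch0 = Input.GetTouch(0);
    Touch touch1 = Input.GetTouch(1);

    Vector2 prevTouchPosition0 = touch0.position - touch0.deltaPosition;
    Vector2 prevTouchPosition1 = touch1.position - touch1.deltaPosition;

    float touchDistance = (touch1.position - touch0.position).magnitude;
    float prevTouchDistance = (prevTouchPosition1 - prevTouchPosition0).magnitude;
    if (prevTouchDistance <= Mathf.Epsilon)
        return; // avoid dividing by zero on the first frame of a pinch

    float touchChangeMultiplier = touchDistance / prevTouchDistance;

    // Scale the camera's distance from the focal point by the inverse of the pinch.
    Vector3 direction = transform.position - focalPoint;
    float newDistance = direction.magnitude / touchChangeMultiplier;

    transform.position = focalPoint + newDistance * direction.normalized;
    transform.LookAt(focalPoint);
}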
Bonus
This answer is already very long. So to briefly answer your question about making the camera focus on where you're pinching:
When you first detect a 2 finger touch, store the screen position and related world position.
When zooming, move the camera to put the world position back at the same screen position.
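A rough sketch of that idea, assuming you stored pinchScreenPoint (the screen position between the fingers) and pinchWorldPoint (the world point that was under it, e.g. from a raycast) when the two-finger touch began; both names are hypothetical and the snippet is untested:
// After moving the camera for the zoom, find which world point now sits at the
// stored screen position (at the stored point's depth), then shift the camera
// so the original world point lands back under the fingers.
float depth = camera.WorldToScreenPoint(pinchWorldPoint).z;
Vector3 nowUnderFingers = camera.ScreenToWorldPoint(
    new Vector3(pinchScreenPoint.x, pinchScreenPoint.y, depth));
camera.transform.position += pinchWorldPoint - nowUnderFingers;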

This is a small example:
if (_Touches.Length == 2)
{
    Vector2 _CameraViewsize = new Vector2(_Camera.pixelWidth, _Camera.pixelHeight);

    Touch _TouchOne = _Touches[0];
    Touch _TouchTwo = _Touches[1];

    Vector2 _TouchOnePrevPos = _TouchOne.position - _TouchOne.deltaPosition;
    Vector2 _TouchTwoPrevPos = _TouchTwo.position - _TouchTwo.deltaPosition;

    float _PrevTouchDeltaMag = (_TouchOnePrevPos - _TouchTwoPrevPos).magnitude;
    float _TouchDeltaMag = (_TouchOne.position - _TouchTwo.position).magnitude;
    float _DeltaMagDiff = _PrevTouchDeltaMag - _TouchDeltaMag;

    _Camera.transform.position += _Camera.transform.TransformDirection((_TouchOnePrevPos + _TouchTwoPrevPos - _CameraViewsize) * _Camera.orthographicSize / _CameraViewsize.y);
    _Camera.orthographicSize += _DeltaMagDiff * _OrthoZoomSpeed;
    _Camera.orthographicSize = Mathf.Clamp(_Camera.orthographicSize, _MinZoom, _MaxZoom) - 0.001f;
    _Camera.transform.position -= _Camera.transform.TransformDirection((_TouchOne.position + _TouchTwo.position - _CameraViewsize) * _Camera.orthographicSize / _CameraViewsize.y);
}
The second video of this tutorial explains it.

Related

Place objects based on coordinates, not on the central coordinates in Unity

I'm creating mesh objects dynamically based on the screen.
The objects that contain the mesh objects are always the same size, but the mesh objects themselves have different shapes and sizes.
The picture should make this clearer; the blue area is actually transparent.
I am currently using the mobile camera to cast a ray at the floor, and I want to place the object at the point where the ray hits.
But this seems to require a lot of calculations.
I think I need to use coordinates other than the object's central coordinates first.
I also think the object should be placed a little above the collision point, by half the height of the mesh object.
So I tried this, but it failed. How can I solve this?
Below is my source code.
Vector3 hitPositon = hit.Pose.position;
Vector3 meshObjectCenter = ObjectPrefab.GetComponent<Renderer>().bounds.center;
Vector3 meshObjectSize = ObjectPrefab.GetComponent<Renderer>().bounds.size;
Vector3 CenterPointRevision = meshObjectCenter - hitPositon;
Vector3 YAxisRevision = new Vector3(0, meshObjectSize.y / 2, 0);
Vector3 NewPoint = ARObjectPrefab.transform.position - CenterPointRevision + YAxisRevision;
ObjectPrefab.transform.position = NewPoint;
The object is in this format, and although the picture above looks like a success, it's actually a failure case.
The position is just the hit location minus the offset to center plus the y-axis offset:
Vector3 hitPositon = hit.Pose.position;
Vector3 meshObjectCenter = ObjectPrefab.GetComponent<Renderer>().bounds.center;
Vector3 meshObjectSize = ObjectPrefab.GetComponent<Renderer>().bounds.size;
Vector3 YAxisRevision = new Vector3(0, meshObjectSize.y / 2, 0);
ObjectPrefab.transform.position = hitPositon - meshObjectCenter + YAxisRevision;

Gyroscope with compass help needed

I need to have a game object point north AND I want to combine this with gyro.attitude input. I have tried, unsuccessfully, to do this in one step. That is, I couldn't make any gyro script, which I found on the net, work with the additional requirement of always pointing north. Trust me, I have tried every script I could find on the subject. I deduced that it's impossible and probably was stupid to think it could be done; at least not this way (i.e. all-in-one). I guess you could say I surmised that you can't do two things at once. Then I thought possibly I could get the same effect by breaking-up the duties. That is, a game object that always points north via the Y axis. Great, got that done like this:
_parentDummyRotationObject.transform.rotation = Quaternion.Slerp(_parentDummyRotationObject.transform.rotation, Quaternion.Euler(0, 360 - Input.compass.trueHeading, 0), Time.deltaTime * 5f);
And with the game object pointing north on the Y, I wanted to add a second game object, a camera in this case, with rotation using gyro input on the X and Z axes. The reason I have to eliminate the Y axis on the camera is that I get double rotation: with two things rotating at once (i.e. camera and game object), a 180 degree rotation yielded 360 in the scene. Remember I need the game object to always point north (IRL) based on the device compass. If my device is pointing towards the East, then my game object would be rotated 90 degrees in the Unity scene as it points (rotates) towards the north.
I have read a lot about gyro camera controllers, and one thing I see mentioned a lot is that you shouldn't try to do this (limit it) on just 1 or 2 axes; with Quaternions it's impossible when you don't know what you're doing, which I clearly do not.
I have tried all 3 solutions from this solved question: Unity - Gyroscope - Rotation Around One Axis Only and each has failed to rotate my camera on 1 axis to satisfy my rotational needs. Figured I'd try getting 1 axis working before muddying the waters with the 2nd axis. BTW, my requirements are simply that the camera should only rotate on 1 axis (in any orientation) based on the X axis of my device. If I could solve for X, then I thought it'd be great to get Z gyro input to control the camera as well. So far I cannot get the camera controlled on just 1 axis (X). Anyway, here are my findings...
The first solution, which used Input.gyro.rotationRateUnbiased, was totally inaccurate. That is, if I rotated my device around a few times and then put my phone/device down on my desk, the camera would be in a different rotation/location each time. There was no consistency. Here's my code for the first attempt/solution:
private void Update()
{
    Vector3 previousEulerAngles = transform.eulerAngles;
    Vector3 gyroInput = Input.gyro.rotationRateUnbiased;

    Vector3 targetEulerAngles = previousEulerAngles + gyroInput * Time.deltaTime * Mathf.Rad2Deg;
    targetEulerAngles.y = 0.0f;
    targetEulerAngles.z = 0.0f;

    transform.eulerAngles = targetEulerAngles;
}
The second solution was very consistent in that I could rotate my device around and then put it down on the desk and the unity camera always ended up in the same location/rotation/state so-to-speak. The problem I had was the camera would rotate on the one axis (X in this case), but it did so when I rotated my device on either the y or x axis. Either type of rotation/movement of my phone caused the unity camera to move on the X. I don't understand why the y rotation of my phone caused the camera to rotate on X. Here is my code for solution #2:
private void Start()
{
    Input.gyro.enabled = true;
    startEulerAngles = transform.eulerAngles;
    startGyroAttitudeToEuler = Input.gyro.attitude.eulerAngles;
}

private void Update()
{
    Vector3 deltaEulerAngles = Input.gyro.attitude.eulerAngles - startGyroAttitudeToEuler;
    deltaEulerAngles.y = 0.0f;
    deltaEulerAngles.z = 0.0f;

    transform.eulerAngles = startEulerAngles - deltaEulerAngles;
}
The 3rd solution: I wasn't sure how to complete this last solution, so it never really worked. With the 2 axes zeroed out, the camera just flipped from facing left to right and back, or top to bottom and back, depending on which axes were commented out. If none of the axes were commented out (like the original solution), the camera would gyro around on all axes. Here's my code for attempt #3:
private void Start()
{
    _upVec = Vector3.zero;
    Input.gyro.enabled = true;
    startEulerAngles = transform.eulerAngles;
}

private void Update()
{
    Vector3 gyroEuler = Input.gyro.attitude.eulerAngles;
    phoneDummy.transform.eulerAngles = new Vector3(-1.0f * gyroEuler.x, -1.0f * gyroEuler.y, gyroEuler.z);
    _upVec = phoneDummy.transform.InverseTransformDirection(-1f * Vector3.forward);
    _upVec.z = 0;
    // _upVec.x = 0;
    _upVec.y = 0;
    transform.LookAt(_upVec);
    // transform.eulerAngles = _upVec;
}
Originally I thought it was my skills, but after spending a month on this I'm beginning to think that this is impossible to do. But that just can't be. I know it's a lot to absorb, but it's such a simple concept.
Any ideas?
EDIT: Thought I'd add my hierarchy:
CameraRotator (parent with script) -> MainCamera (child)
CompassRotator (parent) -> Compass (child with script which rotates parent)
I'd do this in the following way:
Camera with default 0, 0, 0 rotation
Screenshot
Object placed at the center of the default position of the camera.
Script for the Camera:
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class NewBehaviourScript : MonoBehaviour
{
    Camera m_MainCamera;

    // Start is called before the first frame update
    void Start()
    {
        // Disable the sleep timeout during gameplay.
        // You can re-enable the timeout when menu screens are displayed as necessary.
        Screen.sleepTimeout = SleepTimeout.NeverSleep;

        // Enable the gyroscope.
        if (SystemInfo.supportsGyroscope)
        {
            Input.gyro.enabled = true;
        }

        m_MainCamera = Camera.main;
        m_MainCamera.enabled = true;
    }

    // Update is called once per frame
    void Update()
    {
        if (m_MainCamera.enabled)
        {
            // First - grab the gyro's orientation.
            Quaternion tAttitude = Input.gyro.attitude;

            // The device uses a 'left-hand' orientation; we need to transform it to 'right-hand'.
            Quaternion tGyro = new Quaternion(tAttitude.x, tAttitude.y, -tAttitude.z, -tAttitude.w);

            // The gyro attitude is tilted towards the floor and upside-down relative to what we want in Unity.
            // First rotate the orientation up 90 deg on the X axis, then 180 deg on the Z to flip it right-side up.
            Quaternion tRotation = Quaternion.Euler(-90f, 0, 0) * tGyro;
            tRotation = Quaternion.Euler(0, 0, 180f) * tRotation;

            // You can now apply this rotation to any Unity camera!
            m_MainCamera.transform.localRotation = tRotation;
        }
    }
}
With this script my object always faces SOUTH no matter what.
If you want the object to face NORTH you just have to turn the view 180º on the Y axis as a last rotation:
Quaternion tRotation = Quaternion.Euler(-90f, 0, 0) * tGyro;
tRotation = Quaternion.Euler(0, 0, 180f) * tRotation;
//Face NORTH:
tRotation = Quaternion.Euler(0,180f, 0) * tRotation;
Hope this might help ;)

Show object in front of the player always

I'm stuck on this simple problem but unable to understand why I can't control it.
I have these lines of code, which display my canvas object in front of my player (camRotationToWatch is the object name in the code) at a certain rotation of the player.
if (camRotationToWatch.transform.localEulerAngles.x >= navigationCanvasXMinmumLimit && camRotationToWatch.transform.localEulerAngles.x <= navigationCanvasXMaximumLimit)
{
    if (!navCanvasHasDisplay)
    {
        navigationCanvas.SetActive(true);
        //Debug.Log(camRotationToWatch.transform.forward);
        Vector3 navCanvas = camRotationToWatch.transform.position + camRotationToWatch.transform.forward * navCanvasDisplayDistanceFromCam;
        navCanvas = new Vector3(navCanvas.x, 2f, navCanvas.z);
        navigationCanvas.transform.position = new Vector3(navCanvas.x, navCanvas.y, navCanvas.z);
        navigationCanvas.transform.rotation = camRotationToWatch.transform.rotation;
        navCanvasHasDisplay = true;
    }
}
else
{
    //navigationCanvas.SetActive(false);
    if (locationPanel.activeSelf == false && infoPanel.activeSelf == false)
    {
        navigationCanvas.SetActive(false);
        navCanvasHasDisplay = false;
    }
}
This code actually works fine when the camRotationToWatch object rotates from down to up and the canvas shows at the correct position, but when I try to rotate camRotationToWatch from up to down it displays (activates) the canvas at a very high position. How can I restrict the canvas to show at the same position (no matter whether the player rotates from up to down or down to up) but always display in front of the player object?
It's kinda hard trying to figure out what exactly you want to do, but this did what I think you were trying to do:
public GameObject follow;   // The object you want to rotate around
public float distance = 2;  // Distance to keep from the object

private void Update()
{
    Vector3 forward = follow.transform.forward;
    forward.y = 0; // This will result in Vector3.zero if looking straight up or down. Careful!
    transform.position = follow.transform.position + forward * distance;
    transform.rotation = Quaternion.LookRotation(forward, Vector3.up);
}
I believe your "unexpected behavior" is due to the use of euler angles since they are not always entirely predictable. Try using Quaternions or Vector3.Angle() when possible.
If you want to limit the angle (say... if looking down or up more than 45° disable the object) you could do the following:
if (Vector3.Angle(forward, follow.transform.forward) > maxAngle) { ... }
This probably isn't a complete answer but something that might help. This line:
Vector3 navCanvas = camRotationToWatch.transform.position + camRotationToWatch.transform.forward * navCanvasDisplayDistanceFromCam;
You are creating a position at a fixed distance from camRotationToWatch. But if that object is looking up or down, that position is not horizontally at navCanvasDisplayDistanceFromCam. If it's looking straight up, then this position is in fact directly above.
So when you do this to set a fixed vertical height:
navCanvas = new Vector3(navCanvas.x, 2f, navCanvas.z);
you aren't getting the distance from camRotationToWatch that you think you are.
Instead of using camRotationToWatch.transform.forward, create a vector from it and zero out the Y component, and normalize before using it to offset the position. (You will need to watch out for zero length vectors with that though).
Whether that fixes your problem or not is hard to guess, but it should help improve your results.
EDIT: Here is an example of how you can avoid the issue with the canvas being too close:
Vector3 camForward = camRotationToWatch.transform.forward;
camForward.y = 0;
if (camForward.magnitude == 0)
{
    //TODO: you'll need to decide how to deal with a straight up or straight down vector
}
camForward.Normalize();
//Note: now you have a vector lying on the horizontal plane, pointing in
//the direction of camRotationToWatch
Vector3 navCanvas = camRotationToWatch.transform.position + camForward * navCanvasDisplayDistanceFromCam;
//Note: if you want the canvas to be at the player's height (or some offset from it),
//use the player's y
navCanvas = new Vector3(navCanvas.x, Player.transform.position.y, navCanvas.z);
navigationCanvas.transform.position = navCanvas;
Again, this might not fix all your issues but will help to ensure your canvas lies at a set distance horizontally from the Player and will also compensate for the player's up and down motion.

Stop sprite from bouncing

I am making a simple puzzle game on the Unity platform. First, I tested some simple physics on scratch and then wrote it in C#. But for some reason, some of the code isn't producing the same effect. In scratch, the code makes the sprite do exactly what I want; go towards the mouse, slow down, then stop on the mouse. However, (what I presume to be) the same code in Unity makes the sprite go crazy and never stop.
UPDATE
It definitely is cause #2. It turns out the sprite is bouncing off other bounding boxes, causing the velocity to change drastically. It would work with smaller values, except I have gravity. My new question is there any way to stop the sprite from bouncing?
Vector2 CalcHookPull()
{
    //code for finding which hook and force
    //find if mouse pressed
    if (Input.GetMouseButton(0))
    {
        //check if new hook needs to be found
        if (!last)
        {
            Vector2 mypos = new Vector2(transform.position.x, transform.position.y);
            Vector2 relmousepos = new Vector2(Camera.main.ScreenToWorldPoint(Input.mousePosition).x - transform.position.x, Camera.main.ScreenToWorldPoint(Input.mousePosition).y - transform.position.y);
            //player is on layer 8, don't want to check itself
            int layermask = ~(1 << 8);
            RaycastHit2D hit = Physics2D.Raycast(mypos, relmousepos, Mathf.Infinity, layermask);
            maybeHook = hit.collider.gameObject;
        }
        if (maybeHook.tag == "hook")
        {
            float hookx = maybeHook.transform.position.x;
            float hooky = maybeHook.transform.position.y;
            float myx = transform.position.x;
            float myy = transform.position.y;
            //elasticity = 30
            float chx = (hookx - myx) / (5f + 1f / 3f);
            float chy = (hooky - myy) / (5f + 1f / 3f);
            float curchx = GetComponent<Rigidbody2D>().velocity.x;
            float curchy = GetComponent<Rigidbody2D>().velocity.y;
            Vector2 toChange = new Vector2(curchx * 0.3f + chx, curchy * 0.3f + chy);
            Debug.Log(toChange);
            last = Input.GetMouseButton(0);
            return toChange;
        }
        else
        {
            last = Input.GetMouseButton(0);
            return new Vector2(0, 0);
        }
    }
    else
    {
        last = Input.GetMouseButton(0);
        return new Vector2(0, 0);
    }
}
Possible differences:
Vectors work differently than x/y changes (doubtful)
The bounding box is not allowing the sprite to go all the way to the center of the hook object. (likely, but don't know how to fix)
I copied the code wrong (unlikely)
Any help is appreciated, thanks!
I was being stupid. There is already something for this. Should have thought it through more. http://docs.unity3d.com/Manual/class-PhysicsMaterial2D.html
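For completeness, a minimal sketch of applying a zero-bounce material from code (you can just as well create the PhysicsMaterial2D as an asset and assign it to the collider in the Inspector):
// Create a material with no bounciness and assign it to the sprite's collider
// so collisions stop adding velocity back into the Rigidbody2D.
var noBounce = new PhysicsMaterial2D("NoBounce")
{
    bounciness = 0f,
    friction = 0.4f // keep whatever friction suits your game
};
GetComponent<Collider2D>().sharedMaterial = noBounce;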

mouse picking a rotating sprite in an Xna game

I am working on a simple game where you click on square sprites before they disappear. I decided to get fancy and make the squares rotate. Now, when I click on the squares, they don't always respond to the click. I think that I need to rotate the click position around the center of the rectangle(square) but I am not sure how to do this. Here is my code for the mouse click:
if ((mouse.LeftButton == ButtonState.Pressed) &&
(currentSquare.Contains(mouse.X , mouse.Y )))
And here is the rotation logic:
float elapsed = (float)gameTime.ElapsedGameTime.TotalSeconds;
RotationAngle += elapsed;
float circle = MathHelper.Pi * 2;
RotationAngle = RotationAngle % circle;
I am new to Xna and programming in general, so any help is appreciated.
Thanks a lot,
Bill
So you're trying to determine if a point is in a rectangle, but when the rectangle is rotated?
The Contains() method will only work if the current rotation is 0 (I guess currentSquare is a rectangle representing the image position without rotation?).
What you will have to do is do the opposite rotation of the image on the mouse coordinates (the mouse coordinates should rotate around the origin of your image), then calculate if the new position is within currentSquare. You should be able to do all of this using vectors.
(Untested)
bool MouseWithinRotatedRectangle(Rectangle area, Vector2 tmp_mousePosition, float angleRotation)
{
    Vector2 mousePosition = tmp_mousePosition - currentSquare.Origin;

    // Atan2 keeps the correct quadrant (Atan(y / x) does not)
    float mouseOriginalAngle = (float)Math.Atan2(mousePosition.Y, mousePosition.X);

    // Undo the sprite's rotation on the mouse offset, keeping its length
    mousePosition = new Vector2((float)Math.Cos(-angleRotation + mouseOriginalAngle) * mousePosition.Length(),
                                (float)Math.Sin(-angleRotation + mouseOriginalAngle) * mousePosition.Length());

    return area.Contains(mousePosition);
}
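Alternatively, the same inverse rotation can be done without computing the mouse angle at all, by rotating the offset vector directly (another untested sketch):
// Rotate 'point' by 'radians' around the origin using the standard 2D rotation.
Vector2 RotatePoint(Vector2 point, float radians)
{
    float cos = (float)Math.Cos(radians);
    float sin = (float)Math.Sin(radians);
    return new Vector2(point.X * cos - point.Y * sin,
                       point.X * sin + point.Y * cos);
}

// Usage: undo the sprite's rotation on the mouse offset, then test as before.
// Vector2 rotated = RotatePoint(tmp_mousePosition - currentSquare.Origin, -angleRotation);
// return area.Contains(rotated);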
If you don't need pixel-perfect detection you can create a bounding sphere for each piece like this:
var PieceSphere = new BoundingSphere()
{
    Center = new Vector3(new Vector2(Position.X + Width / 2, Position.Y + Height / 2), 0f),
    Radius = Width / 2
};
Then create another bounding sphere around the mouse pointer. For its position use the mouse coordinates and for its radius 1f. Because the mouse pointer will be moving, you also have to update the sphere's center on each update.
Checking for clicks is really simple then:
foreach (Piece p in AllPieces)
{
    if ((mouse.LeftButton == ButtonState.Pressed) && p.BoundingSphere.Intersects(MouseBoundingSphere))
    {
        //Do stuff
    }
}
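The loop above assumes MouseBoundingSphere is kept in sync with the cursor; a minimal sketch of that per-frame update (field name taken from the check above):
// Refresh the 1-unit sphere around the mouse pointer each Update.
MouseBoundingSphere = new BoundingSphere(new Vector3(mouse.X, mouse.Y, 0f), 1f);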
If you are lazy like me you could just do a circular distance check.
Assuming mouse and box.center are Vector2:
// c^2 = a^2 + b^2 (Pythagorean theorem) gives the squared "radius" to a corner
var radiusSquared = (box.width / 2f) * (box.width / 2f) + (box.height / 2f) * (box.height / 2f);
// distance check
bool inside = (mouse - box.center).LengthSquared() < radiusSquared;
Not perfectly accurate, but the user would have a hard time noticing, and inaccuracies that leave a hitbox slightly too large are always forgiven. Not to mention the check is incredibly fast; just calculate the radius when the square is created.
