I'm currently programming a 2D top-view Unity game, and I want to set up the camera so that only a specific area is visible. That is, I know the size of my area, and when the camera, which currently follows the player, reaches the border of that area, I want it to stop moving.
So here is my question: I know where the camera is and how it can follow the player, but I don't know how to calculate the distance between the border of the field and the border of what the camera sees. How can I do that?
Essentially, treat your playable area as a rectangle. Then, make a smaller rectangle within that rectangle that accounts for the camera's orthographic size. Don't forget to include the aspect ratio of your camera when calculating horizontal bounds.
using UnityEngine;

public class ClampedFollowCamera : MonoBehaviour
{
    public Rect myArea;             // the bounds of your playable area
    public Camera cam;              // your orthographic camera, probably Camera.main
    public GameObject playerObject; // your player

    void LateUpdate()
    {
        // Clamp the camera's center so its half-extents never cross the area's edges.
        float newX = Mathf.Clamp(
            playerObject.transform.position.x,
            myArea.xMin + cam.orthographicSize * cam.aspect,
            myArea.xMax - cam.orthographicSize * cam.aspect);
        float newY = Mathf.Clamp(
            playerObject.transform.position.y,
            myArea.yMin + cam.orthographicSize,
            myArea.yMax - cam.orthographicSize);
        cam.transform.position = new Vector3(newX, newY, cam.transform.position.z);
    }
}
If you're using an alternative plane (say xz instead of xy), just swap out the corresponding dimensions in all the calculations.
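For example, a minimal sketch of the XZ variant, assuming the camera looks down the Y axis and reusing the same myArea rect (its y range now standing in for Z):

float newX = Mathf.Clamp(
    playerObject.transform.position.x,
    myArea.xMin + cam.orthographicSize * cam.aspect,
    myArea.xMax - cam.orthographicSize * cam.aspect);
float newZ = Mathf.Clamp(
    playerObject.transform.position.z,
    myArea.yMin + cam.orthographicSize,
    myArea.yMax - cam.orthographicSize);
cam.transform.position = new Vector3(newX, cam.transform.position.y, newZ);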
Alright, I can't find any existing example of this, and my attempt is yielding odd results. I need to drag-resize a flattened cube (like a plane, but it must have thickness, so it's a cube) using a handle in its corner, like a window on your desktop.
So far I've created my handle plane and, via script, attached it to my cube plane. The cube plane and the handle are children of an empty to which the scale is applied, so that the cube plane scales from left to right as desired.
That works; however, my attempt at using the delta of the scale handle's position to scale the parent empty scales either way too much or in odd directions:
// declarations added for completeness
GameObject scaleHandle;
Vector3 scaleHandleInitialPos;
float width, height, thickness;

void Awake()
{
    scaleHandle = GameObject.FindGameObjectWithTag("ScaleHandle");
    scaleHandleInitialPos = scaleHandle.transform.localPosition;
}

void Update()
{
    // accumulate the handle's offset from its initial position every frame
    width += -1 * (scaleHandleInitialPos.x - scaleHandle.transform.localPosition.x);
    height += -1 * (scaleHandleInitialPos.y - scaleHandle.transform.localPosition.y);
    transform.parent.localScale = new Vector3(width, height, thickness);
}
What is wrong here?
Hierarchy: (screenshot omitted)
Transform of the child: (screenshot omitted; the white square is the scaleHandle)
I have also tried updating scaleHandleInitialPos every Update(), as well as doing width = scaleHandle.transform.localPosition.x / scaleHandleInitialPos.x; instead.
I need to calculate the correct position of a 3D object which I'm displaying over a RawImage in my UI. If I put my UI image in the center of the screen I can calculate it perfectly, but if I move the image around my UI canvas I get results that are off by an offset, depending on which side I move it to. I've taken a couple of screenshots of what I mean: the orange shape is my 3D quad, and the white square is just a debug image showing where my point is being calculated.
The setup I have is this:
- a world camera pointing on a 3D quad
- a ui canvas with a dedicated perspective camera (Screen space - Camera)
- a panel in my ui canvas displaying the 3D quad
The code:
var worldPoint = t.Value.MeshRenderer.bounds.min; //t.Value is the 3D quad
var screenPoint = worldCamera.WorldToScreenPoint(worldPoint);
screenPoint.z = (baseCanvas.transform.position - uiCamera.transform.position).magnitude; //baseCanvas is the UI canvas
var pos = uiCamera.ScreenToWorldPoint(screenPoint);
debugger.transform.position = pos; //debugger is just the square image used to see where my calculated point is landing
I've tried multiple ways, like this:
var screenPoint = worldCamera.WorldToScreenPoint(t.Value.MeshRenderer.bounds.min);
Vector2 localPoint;
RectTransformUtility.ScreenPointToLocalPointInRectangle(rectTransform, screenPoint, uiCamera, out localPoint); //rectTransform is my UI panel
debugger.transform.localPosition = localPoint;
But I always get the same result. How can I make the calculation correct, taking this offset into account?
I am using this code:
void Update()
{
    accel = Input.acceleration.x;
    transform.Translate(0, 0, accel);
    transform.position = new Vector3(Mathf.Clamp(transform.position.x, -6.9f, 6.9f), -4.96f, 18.3f);
}
It makes my gameObject move the way I want, BUT the problem is that when I put the app on my phone, (-6.9) and (6.9) are not the edges of my screen, and I cannot figure out how to adjust those values for every screen size.
This is going to be a bit of a longer answer, so please bear with me here.
Note that this post is assuming that you are using an orthographic camera - the formula used won't work for perspective cameras.
As far as I understand, you want to keep your object inside the screen boundaries. Screen boundaries in Unity are determined by a combination of the camera's size and the screen's size.
float height = Camera.main.orthographicSize * 2;
A camera's orthographic size is half of the screen's height in world units, so multiplying it by two gives the number of world units between the top and bottom of the screen.
To get the width from this value, we multiply it by the screen's aspect ratio, i.e. the screen's width divided by its height.
float width = height * Screen.width / Screen.height;
Now we have the dimensions of our screen, but we still need to keep the object inside those bounds.
First, we create an instance of the type Bounds, which we will use to determine the maximum and minimum values for the position.
Bounds bounds = new Bounds (Vector3.zero, new Vector3(width, height, 0));
Note that we used Vector3.zero since the center of our bounds instance should be the world's center. This is the center of the area that your object should be able to move inside.
Lastly, we clamp the object's position values, according to our resulting bounds' properties.
Vector3 clampedPosition = transform.position;
clampedPosition.x = Mathf.Clamp(clampedPosition.x, bounds.min.x, bounds.max.x);
transform.position = clampedPosition;
This should ensure that your object will never leave the screen's side boundaries!
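Putting the pieces together, a minimal sketch as a component (assuming an orthographic Camera.main and a play area centered on the world origin):

using UnityEngine;

// Keeps this object inside the horizontal screen bounds of an
// orthographic camera whose view is centered on the world origin.
public class KeepOnScreen : MonoBehaviour
{
    Bounds bounds;

    void Start()
    {
        float height = Camera.main.orthographicSize * 2;
        float width = height * Screen.width / Screen.height;
        bounds = new Bounds(Vector3.zero, new Vector3(width, height, 0));
    }

    void LateUpdate()
    {
        Vector3 clampedPosition = transform.position;
        clampedPosition.x = Mathf.Clamp(clampedPosition.x, bounds.min.x, bounds.max.x);
        transform.position = clampedPosition;
    }
}

If the screen resolution or orthographic size can change at runtime, recompute the bounds each frame instead of caching them in Start().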
I am using bounds.extents to represent the radius of a sprite in Unity. In my simulation I change the size of the sprite using transform.localScale. When I spawn new sprites, I want the radius not to exceed my ground (represented as a plane), so I make sure a new sprite is never spawned within bounds.extents of the plane's edge. But when the sprites reach their maximum radius, they extend past the edge of the plane. So my question is: what is the relation between bounds.extents and transform.localScale?
You have to shrink the area you allow the sprites to be placed in by half the sprite's size relative to the plane's extents, because when you place a sprite's center right on the edge of the allowed area, half of it sticks out past the plane's border. Did I understand the problem correctly?
As for the relation: bounds.extents describes half the size of the sprite in world units, while transform.localScale is the scale relative to the object's parent's scale. localScale only indicates the current size compared to the sprite's original size; it doesn't indicate a size in units.
So assuming the parent's scale is 1:
bounds.extents = (original bounds.extents) * transform.localScale
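So, when spawning a sprite that will grow later, reserve room for its extents at the maximum scale it will reach, not its current one. A rough sketch, where planeHalfWidth and maxScale are assumed values for illustration:

using UnityEngine;

public class SpawnAreaExample : MonoBehaviour
{
    public SpriteRenderer sprite;      // the sprite being spawned
    public float planeHalfWidth = 5f;  // assumed: half-extent of the ground plane in world units
    public float maxScale = 2f;        // assumed: largest localScale the simulation will apply

    Vector2 RandomSpawnPosition()
    {
        // bounds.extents = (extents at scale 1) * localScale, so divide the
        // current extents by the current scale to recover the base extents.
        float baseExtent = sprite.bounds.extents.x / sprite.transform.localScale.x;
        float reserved = baseExtent * maxScale;      // extents at maximum size
        float range = planeHalfWidth - reserved;     // allowed distance from the plane's center
        return new Vector2(Random.Range(-range, range), Random.Range(-range, range));
    }
}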
I'm trying to create an isometric (35 degree) view using a camera.
I'm drawing a triangle which rotates around the Z axis.
For some reason the triangle is being cut off at a certain point of the rotation, giving this result: (screenshot omitted)
I calculated the camera position from the angle and Z distance using this site: http://www.easycalculation.com/trigonometry/triangle-angles.php
This is how I define the camera:
// isometric angle is 35.2° => Y = -14.1759 for Z = 10
Vector3 camPos = new Vector3(0, -14.1759f, 10f);
Vector3 lookAt = new Vector3(0, 0, 0);
viewMat = Matrix.CreateLookAt(camPos, lookAt, Vector3.Up);
//projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, Game.GraphicsDevice.Viewport.AspectRatio, 1, 100);
float width = GameInterface.vpMissionWindow.Width;
float height = GameInterface.vpMissionWindow.Height;
projMat = Matrix.CreateOrthographic(width, height, 1, 1000);
worldMat = Matrix.Identity;
This is how I recalculate the world matrix rotation:
worldMat = Matrix.CreateRotationZ(3 * visionAngle);
// keep the triangle around this center point
worldMat *= Matrix.CreateTranslation(center);
effect.Parameters["xWorld"].SetValue(worldMat);
// update the rotation angle
visionAngle += 0.005f;
Any idea what I might be doing wrong? This is my first time working on a 3D project.
Your triangle is being clipped by the far-plane.
When your GPU renders things, it only keeps pixels that fall within the range (-1, -1, 0) to (1, 1, 1): between the bottom left of the viewport and the top right, and between a "near" plane and a "far" plane on the Z axis. (As well as driving clipping, this also determines the range of values covered by the depth buffer.)
Your projection matrix takes vertices that are in world or view space, and transforms them so that they fit inside that raster space. You may have seen the standard image of a view frustum for a perspective projection matrix, that shows how the edges of that raster region appear when transformed back into world space. The same thing exists for orthographic projections, but the edges of the view region are parallel.
The simple answer is to increase the distance to your far plane so that all your geometry falls within the raster region. It is the fourth parameter to Matrix.CreateOrthographic.
Increasing the distance between your near and far plane will reduce the precision of your depth buffer - so avoid making it any bigger than you need it to be.
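For instance, a hedged way to size the far plane from known scene content (sceneRadius here is an assumed value, not something from the question):

// the far plane must reach from the camera past the farthest geometry;
// camPos is the camera position from the question's setup code
float camDistance = camPos.Length();  // distance from the camera to the look-at origin
float sceneRadius = 50f;              // assumed: radius of a sphere bounding all geometry
projMat = Matrix.CreateOrthographic(width, height, 1, camDistance + sceneRadius);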
I think your far plane is cropping it, so you should increase the projection matrix's far-plane argument:
projMat = Matrix.CreateOrthographic(width, height, 1, 10000);