I am using this code:
void Update()
{
accel = Input.acceleration.x;
transform.Translate (0, 0, accel);
transform.position = new Vector3 (Mathf.Clamp (transform.position.x,-6.9f, 6.9f), -4.96f, 18.3f);
}
It makes my gameObject move the way I want, BUT the problem is that when I put the app on my phone, (-6.9) and (6.9) are not the "ends" of my screen. I cannot figure out how to adjust those values for every screen size.
This is going to be a bit of a longer answer, so please bear with me here.
Note that this post assumes you are using an orthographic camera - the formula used won't work for perspective cameras.
As far as I can understand your desire is to keep your object inside of the screen boundaries. Screen boundaries in Unity are determined by a combination of camera size and screen size.
float height = Camera.main.orthographicSize * 2;
A camera's orthographic size is half of the screen's height in world units. Multiplying this value by two therefore gives the number of world units between the top and bottom of the screen.
To get the width, we take that height and multiply it by the screen's width divided by the screen's height (the aspect ratio).
float width = height * Screen.width / Screen.height;
Now we have the dimensions of our screen, but we still need to keep the object inside those bounds.
First, we create an instance of the type Bounds, which we will use to determine the maximum and minimum values for the position.
Bounds bounds = new Bounds (Vector3.zero, new Vector3(width, height, 0));
Note that we used Vector3.zero since the center of our bounds instance should be the world's center. This is the center of the area that your object should be able to move inside.
Lastly, we clamp the object's position values, according to our resulting bounds' properties.
Vector3 clampedPosition = transform.position;
clampedPosition.x = Mathf.Clamp(clampedPosition.x, bounds.min.x, bounds.max.x);
transform.position = clampedPosition;
This should ensure that your object will never leave the screen's side boundaries!
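For reference, here is the whole thing put together with the Update() from the question - a minimal sketch, assuming Camera.main is the orthographic camera, that the visible area is centred on the world origin, and keeping the fixed y/z values from the question:
private Bounds bounds;

void Start()
{
    // Visible area in world units, computed once from the orthographic camera.
    float height = Camera.main.orthographicSize * 2;
    float width = height * Screen.width / Screen.height;
    bounds = new Bounds(Vector3.zero, new Vector3(width, height, 0));
}

void Update()
{
    float accel = Input.acceleration.x;
    transform.Translate(0, 0, accel);
    // Clamp against the computed bounds instead of the hard-coded -6.9f / 6.9f.
    transform.position = new Vector3(
        Mathf.Clamp(transform.position.x, bounds.min.x, bounds.max.x),
        -4.96f, 18.3f);
}
If the object itself has some width, you may also want to shrink the bounds by the object's half-width, so that its edges (not just its centre) stay on screen.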
Say I have a GameObject and a camera inside a 2D scene, and I want to change the size and position of the camera so that the object stays visible even when the screen resolution changes. How can I do that?
TL;DR: Scroll down to the bottom for the code.
First up, we must set the position of the camera to the middle of the object so that scaling the camera becomes easier.
Second, to scale the camera, we're going to change the orthographicSize of the camera in our script (which is the Size attribute in the Camera component). But how do we calculate that attribute?
Basically, the Size attribute here is half the height of the camera's view. So, for example, if we set the Size to 5, the camera's view will be 10 world units tall.
So, it seems like we just have to get the height of the object, divide it by 2 and set the Size of the camera to the result, right? (1)
Well, not really. While that might work in certain cases, when the object is much, much wider than the screen and much, much shorter in height, the camera won't be able to see all of the object.
But why is that? Let's say that our camera has a width/height of 16/9, and our object is 100/18. If we scale using the height, our camera's width/height becomes 32/18, and while that's enough to cover the height, it isn't enough to cover the width. So, another approach is to calculate using the width:
take the width of the object, divide it by the width of the camera, multiply by the height of the camera (and of course divide by 2). That way the whole width of the object fits, because dividing by the camera's aspect ratio converts the required width back into a half-height. (2)
BUT AGAIN, this has the same problem as our first approach, only with the object being too tall instead of too wide.
So, to solve this, we just add a check: if the first approach (see (1)) would let the object overflow horizontally, we use the second approach instead (see (2)). And that's it.
And here's the code btw:
// replace the `cam` variable with your camera.
float w = <the_width_of_object>;
float h = <the_height_of_object>;
float x = w * 0.5f - 0.5f;
float y = h * 0.5f - 0.5f;
cam.transform.position = new Vector3(x, y, -10f);
cam.orthographicSize = ((w > h * cam.aspect) ? (float)w / (float)cam.pixelWidth * cam.pixelHeight : h) / 2;
// to add padding, just add a number to the result of the `orthographicSize`
// calculation, like this:
// ... cam.pixelHeight : h) / 2 + 1
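Since cam.aspect is normally just cam.pixelWidth / cam.pixelHeight, the same size calculation can also be written a bit more directly (an equivalent sketch, not a change in behaviour):
// Width-limited: fit the object's width; otherwise fit its height.
cam.orthographicSize = ((w > h * cam.aspect) ? w / cam.aspect : h) / 2;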
Alright, I can't find any example of this already done, and my attempt is yielding odd results. I need to drag to resize a flattened cube (like a plane, but it must have thickness, so it's a cube) using a handle in the corner, like a window on your desktop.
So far I've created my handle plane and grabbed a reference to it from a script attached to my cube plane. The cube plane and the handle are children of an empty to which the scale is applied, so that the cube plane scales from left to right as desired:
That works; however, my attempt at using the scale handle's position delta to scale the parent empty scales either way too much or in odd directions:
void Awake() {
scaleHandle = GameObject.FindGameObjectWithTag("ScaleHandle");
scaleHandleInitialPos = scaleHandle.transform.localPosition;
}
void Update()
{
width += -1*(scaleHandleInitialPos.x - scaleHandle.transform.localPosition.x);
height += -1 * (scaleHandleInitialPos.y - scaleHandle.transform.localPosition.y);
transform.parent.localScale = new Vector3(width, height, thickness);
}
What is wrong here?
Hierarchy (screenshot):
Transform of the child (screenshot):
I've also tried updating scaleHandleInitialPos every Update(), and doing width = scaleHandle.transform.localPosition.x / scaleHandleInitialPos.x; (the white square in the screenshots is the scaleHandle), with the same odd results.
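For what it's worth, one way to make the scale track the handle directly - a hypothetical sketch, assuming the parent's starting scale is cached in Awake() (initialScale is not a name from the original code) - is to derive the scale from the handle's total offset instead of adding a delta to width/height every frame:
// In Awake(): initialScale = transform.parent.localScale;  (hypothetical field)
Vector3 offset = scaleHandle.transform.localPosition - scaleHandleInitialPos;
transform.parent.localScale = new Vector3(
    initialScale.x + offset.x,
    initialScale.y + offset.y,
    thickness);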
Overview
I've been looking around for a while and haven't found an answer, so hopefully the community here can help me out. I am re-working my look-at camera (written pre-2000) and am having trouble getting rid of an issue where the look-at and up vectors become aligned, causing the camera to spin wildly out of control. I originally understood this to be gimbal lock, but now I'm not so sure of that.
From my understanding of gimbal lock, when pitch becomes aligned with roll, pitch becomes roll; and in essence this is what it appears to be. But the problem is that the rate of change shouldn't increase just because the axes become aligned - I should just get a smooth roll. Instead I get a violent roll in which I can't really tell which way the roll is going.
Updating the Camera's Position
When the user moves the mouse I move the camera based on the mouse's X and Y coordinates:
Vector2 mousePosition = new Vector2(e.X, e.Y);
Vector2 delta = mousePosition - mouseSave;
mouseSave = mousePosition;
ShiftOrbit(delta / moveSpeed);
Within the ShiftOrbit method, I calculate the new position based on the look-at, right, and up vectors in relationship to the delta passed from the mouse event above:
Vector3 lookAt = Position - Target;
Vector3 right = Vector3.Normalize(Vector3.Cross(lookAt, Up));
Vector3 localUp = Vector3.Normalize(Vector3.Cross(right, lookAt));
Vector3 worldYaw = right * delta.X * lookAt.Length();
Vector3 worldPitch = localUp * delta.Y * lookAt.Length();
Position = Vector3.Normalize(Position + worldYaw + worldPitch) * Position.Length();
This works smoothly as it should and moves the camera around its target in any direction of my choosing.
The View Matrix
This is where I experience the problem mentioned in the overview above. My Up property was previously set to always be 0, 0, 1 due to my data being in ECR coordinates. However, this is what causes the axis alignment as I move the camera around and the view matrix is updated. I use the SharpDX method Matrix.CreateLookAtRH(Position, Target, Up) to create my view matrix.
After discovering that the Up vector used when creating the view matrix should be updated instead of always being 0, 0, 1, I encountered another issue: I was now introducing roll whenever yaw and pitch were applied. This shouldn't occur due to a requirement, so I immediately began pursuing a fix.
Originally I performed a check to see if I was coming close to being axis aligned; if I was, I set the Up used to create my view matrix to the camera's local up, and if I wasn't, I used only the Z axis of the local up to ensure that up was either straight up or straight down.
float dot = Math.Abs(Vector3.Dot(Up, Position) / (Up.Length() * Position.Length()));
if (dot > 0.98)
Up = localUp;
else
Up = new Vector3(0, 0, localUp.Z);
However, this was a bit jumpy and still didn't seem quite right. After some trial and error, along with some extensive research on the web trying to find potential solutions, I remembered how linear interpolation can transition smoothly from one value to another over a period of time. I then moved to using Vector3.Lerp instead:
float dot = Math.Abs(Vector3.Dot(Up, Position) / (Up.Length() * Position.Length()));
Up = Vector3.Lerp(new Vector3(0, 0, localUp.Z), localUp, dot);
This is very smooth, and only causes any roll when I am very near to being axis aligned, which isn't enough to be noticeable by the everyday user.
The Problem
My camera also has the ability to attach to a point other than 0, 0, 0, and in this case, the up vector for the camera is set to the normalized position of the target. This causes the original issue in the overview when using Vector3.Lerp as above; so, in the case where my camera is attached to a point other than 0, 0, 0 I do the following instead:
Up = Vector3.Lerp(Vector3.Normalize(Target), localUp, dot);
However, even this doesn't work and I have no idea how to get it to do so. I've been working at this problem for a few days now and have made an extensive effort to fix it, and this is a big improvement so far.
What can I do to prevent the violent spinning using Vector3.Lerp when the up isn't equivalent to 0, 0, z?
Imagine a vertical plane that is rotated around the vertical axis by yaw (ϕ):
The camera is only allowed to rotate with the plane or in the plane, its in-plane orientation given by the pitch (θ):
ϕ and θ should be stored and incremented with the input delta. With this setup, the camera will never tilt, and the local up direction can always be computed:
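(The formulas were shown as an image in the original answer; here is a possible reconstruction in code, assuming a Z-up world with θ measured up from the horizontal plane - the exact convention is an assumption on my part:)
// Hypothetical reconstruction: front (d) and up (u) from yaw phi and pitch theta,
// assuming Z is the world's vertical axis and theta is measured from the horizontal.
Vector3 d = new Vector3(
    (float)(Math.Cos(theta) * Math.Cos(phi)),
    (float)(Math.Cos(theta) * Math.Sin(phi)),
    (float)Math.Sin(theta));
Vector3 u = new Vector3(
    (float)(-Math.Sin(theta) * Math.Cos(phi)),
    (float)(-Math.Sin(theta) * Math.Sin(phi)),
    (float)Math.Cos(theta));
// By construction d · u = 0, so they never become aligned.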
d and u are the local front and up directions respectively, and are always perpendicular (so alignment won't be an issue). The target can of course be taken as the position + d.
But wait, there's a catch.
Suppose you move your mouse to the right so that ϕ increases; you observe:
If the camera is upright, the view rotates to the right.
If the camera is upside-down, the view rotates to the left.
Ideally this should be consistent regardless of the vertical orientation.
The solution is to flip the sign of the increments to ϕ when the camera is upside down. One way is to scale the increments by cos(θ), which also smoothly reduces the sensitivity as θ approaches 90 or 270 degrees, so that there is no sudden change in the horizontal rotation direction.
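As a rough sketch of that bookkeeping (the variable names here are illustrative, not from the original answer):
// Accumulate yaw and pitch from the mouse delta; scaling the yaw increment by
// cos(theta) flips its sign smoothly when the camera is upside down.
phi   += deltaYaw * (float)Math.Cos(theta);
theta += deltaPitch;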
I'm currently programming a 2D top-down Unity game, and I want to set up the camera so that only a specific area is visible. That means I know the size of my area, and when the camera, which is following the player, reaches the border of the area, I want it to stop.
So here is my question: I know where the camera is and how it can follow the player, but I don't know how to calculate the distance between the border of the field and the border of what the camera sees. How can I do that?
Essentially, treat your playable area as a rectangle. Then, make a smaller rectangle within that rectangle that accounts for the camera's orthographic size. Don't forget to include the aspect ratio of your camera when calculating horizontal bounds.
Rect myArea; // this stores the bounds of your playable area
Camera cam; // this is your orthographic camera, probably Camera.main
GameObject playerObject; // this is your player
float newX = Mathf.Clamp(
playerObject.transform.position.x,
myArea.xMin + cam.orthographicSize * cam.aspect,
myArea.xMax - cam.orthographicSize * cam.aspect
);
float newY = Mathf.Clamp(
playerObject.transform.position.y,
myArea.yMin + cam.orthographicSize,
myArea.yMax - cam.orthographicSize
);
cam.transform.position = new Vector3(newX,newY,cam.transform.position.z);
If you're using an alternative plane (say xz instead of xy), just swap out the corresponding dimensions in all the calculations.
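For completeness, here's a minimal sketch of how that clamp might sit in a follow-camera script (my own wrapper, not part of the original answer; it assumes an orthographic camera on the same GameObject and runs the clamp in LateUpdate so the player has already moved that frame):
using UnityEngine;

public class BoundedFollowCamera : MonoBehaviour
{
    public Rect myArea;             // bounds of the playable area
    public GameObject playerObject; // the player being followed
    private Camera cam;

    void Awake()
    {
        cam = GetComponent<Camera>();
    }

    void LateUpdate()
    {
        float halfWidth = cam.orthographicSize * cam.aspect;
        float newX = Mathf.Clamp(playerObject.transform.position.x,
            myArea.xMin + halfWidth, myArea.xMax - halfWidth);
        float newY = Mathf.Clamp(playerObject.transform.position.y,
            myArea.yMin + cam.orthographicSize, myArea.yMax - cam.orthographicSize);
        transform.position = new Vector3(newX, newY, transform.position.z);
    }
}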
I'm trying to create an isometric (35 degrees) view by using a camera.
I'm drawing a triangle which rotates around Z axis.
For some reason the triangle is being cut off at a certain point of the rotation, giving this result (screenshot):
I calculate the camera position from the angle and Z distance using this site: http://www.easycalculation.com/trigonometry/triangle-angles.php
This is how I define the camera:
// isometric angle is 35.2° => Y = -14.1759 for Z = 10
Vector3 camPos = new Vector3(0, -14.1759f, 10f);
Vector3 lookAt = new Vector3(0, 0, 0);
viewMat = Matrix.CreateLookAt(camPos, lookAt, Vector3.Up);
//projectionMatrix = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4, Game.GraphicsDevice.Viewport.AspectRatio, 1, 100);
float width = GameInterface.vpMissionWindow.Width;
float height = GameInterface.vpMissionWindow.Height;
projMat = Matrix.CreateOrthographic(width, height, 1, 1000);
worldMat = Matrix.Identity;
This is how I recalculate the world matrix rotation:
worldMat = Matrix.CreateRotationZ(3* visionAngle);
// keep the triangle around this center point
worldMat *= Matrix.CreateTranslation(center);
effect.Parameters["xWorld"].SetValue(worldMat);
// updating rotation angle
visionAngle += 0.005f;
Any idea what I might be doing wrong? This is my first time working on a 3D project.
Your triangle is being clipped by the far-plane.
When your GPU renders stuff, it only renders pixels that fall within the range (-1, -1, 0) to (1, 1, 1). That is: between the bottom left of the viewport and the top right. But also between some "near" plane and some "far" plane on the Z axis. (As well as doing clipping, this also determines the range of values covered by the depth buffer.)
Your projection matrix takes vertices that are in world or view space, and transforms them so that they fit inside that raster space. You may have seen the standard image of a view frustum for a perspective projection matrix, that shows how the edges of that raster region appear when transformed back into world space. The same thing exists for orthographic projections, but the edges of the view region are parallel.
The simple answer is to increase the distance to your far plane so that all your geometry falls within the raster region. It is the fourth parameter to Matrix.CreateOrthographic.
Increasing the distance between your near and far plane will reduce the precision of your depth buffer - so avoid making it any bigger than you need it to be.
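One way to size the planes without guessing (a sketch under the assumption that all the geometry stays within a few units of the look-at target; sceneRadius is a made-up name):
// Choose near/far planes that just enclose a sphere of radius sceneRadius
// around the look-at target, based on the actual camera distance.
float camDistance = (camPos - lookAt).Length(); // ~17.3 for the values above
float sceneRadius = 5f;                         // hypothetical bound on the geometry
projMat = Matrix.CreateOrthographic(width, height,
    camDistance - sceneRadius,   // near plane
    camDistance + sceneRadius);  // far plane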
I think your far plane is cropping it, so you should increase the projection matrix's far-plane argument...
projMat = Matrix.CreateOrthographic(width, height, 1, 10000);