I am using bounds.extents to represent the radius of a sprite in Unity. In my simulation I change the size of the sprite using transform.localScale. When I spawn new sprites, I want to make sure the radius won't exceed my ground (represented as a plane), so I ensure the new sprite is not spawned within a range of bounds.extents from the edge of the plane. But when the sprites reach their maximum radius, they exceed the edge of the plane. So my question is: what is the relation between bounds.extents and transform.localScale?
You have to make sure that the radius you allow the sprites to be placed in is less than the plane's extents minus half the size of the sprite, because when you place the sprite at the edge of that radius, its center sits on the border of the plane, so half of it hangs outside. Did I understand the problem correctly?
As for the relation: bounds.extents describes half the size of the sprite in world units, while transform.localScale is the scale relative to the object's parent's scale. localScale is only an indication of the current size compared to the original size of the sprite; it doesn't indicate the size in units.
So assuming the parent's scale is 1:
bounds.extents = (original bounds.extents) * transform.localScale
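As a rough sketch of what that means in practice (sprite, maxScale, and planeHalfWidth are placeholder names; this assumes a SpriteRenderer and an unscaled parent):
SpriteRenderer sr = sprite.GetComponent<SpriteRenderer>();
Vector3 originalExtents = sr.bounds.extents;

sprite.transform.localScale = new Vector3(2f, 2f, 1f);
// sr.bounds.extents is now originalExtents * 2 on x and y.

// When reserving space at spawn time, use the extents at the sprite's
// maximum scale, not its current one, or grown sprites will overhang:
float maxRadius = originalExtents.x * maxScale;   // maxScale: your growth cap
float spawnLimit = planeHalfWidth - maxRadius;    // half the plane's width
The practical upshot for your spawning problem: clamp spawn positions using the extents at the sprite's maximum scale, so the sprite still fits once it has grown.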
I want to change the direction of the floor's vertex normals as the ball rolls over the floor. I just need some direction on how to achieve this.
So far this is the direction that I'm heading:
Make a copy of all the vertex normals of the floor on start.
On collision, get the contact point and raycast/spherecast/boxcast to find the affected vertices. (Set a variable offset to control how many vertices are affected by the cast.)
Find the normals belonging to those vertices.
Rotate the affected normals parallel to the ball's closest surface point.
As the ball moves away from the affected floor vertices, slowly return their normals to the original direction. (Set a variable to control how quickly the normals rotate back.)
I just need help figuring out which type of cast to use and how to rotate the normals parallel to the ball's surface. This is for a mobile platform, so performance is a must.
Thanks in advance.
Here's how you'd go about modifying the normals:
Mesh mesh = GetComponent<MeshFilter>().mesh;
Vector3[] vertices = mesh.vertices;
Vector3[] normals = mesh.normals;
You'd want to use the vertices list to figure out which indexes to modify (presumably also converting from local space to world space). You could then raycast from the world-space coordinate to the ball's center,¹ and use the RaycastHit.normal to figure out the angle to the ball.
Some clever vector math from there to figure out the new normal for your plane:
Find the vector perpendicular to both hit.normal and Vector3.up: this vector will be parallel to the plane. If the two vectors are parallel, bail out: your normal is unchanged (or should be returned to its original value, which will be the same vector as the raycast to find the sphere).
Find the vector perpendicular to both that vector and hit.normal: this vector will be your new normal.
¹ Actually, you'll want to know how far down from the ball's center you should target; otherwise you'll get the most extreme offsets as the ball moves farther away from the plane. So you want the ball's position on X and Z, but a fixed offset up from the plane for Y. This won't be too difficult to calculate.
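In code, that vector math might look something like this (a sketch, not drop-in code: hit is assumed to be the RaycastHit from the vertex toward the ball, and Vector3.up is the floor's original normal):
Vector3 RotatedNormal(RaycastHit hit)
{
    // Step 1: a vector parallel to the plane, perpendicular to both
    // hit.normal and up.
    Vector3 tangent = Vector3.Cross(hit.normal, Vector3.up);

    // If hit.normal and up are parallel, the cross product vanishes:
    // bail out and keep (or restore) the original normal.
    if (tangent.sqrMagnitude < 1e-6f)
        return Vector3.up;

    // Step 2: perpendicular to both the tangent and hit.normal is
    // the new normal for this vertex.
    return Vector3.Cross(tangent, hit.normal).normalized;
}
Write the result into your normals array for each affected vertex, then assign mesh.normals = normals to apply it.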
I would try something like:
Create a texture for the normals, where each pixel is the normal of a vertex (like a grid). Each frame, calculate the corresponding texture coordinate from the 3D ball's position and draw a ball/circle/sprite onto the texture. Then you could use a compute shader to slowly revert the normals to the default up vector.
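As a rough CPU-side illustration of the revert step (a compute shader would do the same thing per pixel; normalTex and revertSpeed are made-up names):
Texture2D normalTex; // each pixel encodes a normal, remapped from [-1,1] to [0,1]
float revertSpeed = 2f; // made-up tuning value

void RevertNormals()
{
    Color[] pixels = normalTex.GetPixels();
    for (int i = 0; i < pixels.Length; i++)
    {
        // Decode the stored normal back into [-1,1].
        Vector3 n = new Vector3(pixels[i].r, pixels[i].g, pixels[i].b) * 2f - Vector3.one;

        // Ease it back toward the default up vector.
        n = Vector3.Slerp(n.normalized, Vector3.up, revertSpeed * Time.deltaTime);

        // Re-encode into [0,1] and store.
        n = (n + Vector3.one) * 0.5f;
        pixels[i] = new Color(n.x, n.y, n.z);
    }
    normalTex.SetPixels(pixels);
    normalTex.Apply();
}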
I am using this code:
float accel; // field, so the snippet compiles on its own

void Update()
{
    accel = Input.acceleration.x;
    transform.Translate(0, 0, accel);
    transform.position = new Vector3(Mathf.Clamp(transform.position.x, -6.9f, 6.9f), -4.96f, 18.3f);
}
It makes my gameObject move the way I want it, BUT the problem is that when I put the app on my phone, (-6.9) and (6.9) are not the ends of my screen, and I cannot figure out how to adjust those values for every screen size.
This is going to be a bit of a longer answer, so please bear with me here.
Note that this post is assuming that you are using an orthographic camera - the formula used won't work for perspective cameras.
As far as I can understand, you want to keep your object inside the screen boundaries. Screen boundaries in Unity are determined by a combination of camera size and screen size.
float height = Camera.main.orthographicSize * 2;
A camera's orthographic size is half of the screen's height in world units, so multiplying it by two gives the number of world units between the top and bottom of the screen.
To get the width, we take that value and multiply it by the screen's width divided by the screen's height (the aspect ratio).
float width = height * Screen.width / Screen.height;
Now we have the dimensions of our screen, but we still need to keep the object inside those bounds.
First, we create an instance of the type Bounds, which we will use to determine the maximum and minimum values for the position.
Bounds bounds = new Bounds (Vector3.zero, new Vector3(width, height, 0));
Note that we used Vector3.zero since the center of our bounds instance should be the world's center. This is the center of the area that your object should be able to move inside.
Lastly, we clamp the object's position values, according to our resulting bounds' properties.
Vector3 clampedPosition = transform.position;
clampedPosition.x = Mathf.Clamp(clampedPosition.x, bounds.min.x, bounds.max.x);
transform.position = clampedPosition;
This should ensure that your object will never leave the screen's side boundaries!
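Putting it all together, a minimal version of the script might look like this (assuming an orthographic Camera.main; attach it to the moving object):
using UnityEngine;

public class ClampToScreen : MonoBehaviour
{
    Bounds bounds;

    void Start()
    {
        // Screen dimensions in world units (orthographic cameras only).
        float height = Camera.main.orthographicSize * 2;
        float width = height * Screen.width / Screen.height;
        bounds = new Bounds(Vector3.zero, new Vector3(width, height, 0));
    }

    void LateUpdate()
    {
        // Clamp after all movement for the frame has been applied.
        Vector3 clampedPosition = transform.position;
        clampedPosition.x = Mathf.Clamp(clampedPosition.x, bounds.min.x, bounds.max.x);
        transform.position = clampedPosition;
    }
}
LateUpdate is used here simply so the clamp runs after whatever movement you apply in Update.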
I'm developing a little game where I generate rooms of different sizes, and I would like the randomly generated room to always be fully visible on screen, regardless of its size. The camera is at a top-down angle (rotation = 90,0,0).
I tried to create a relationship between the room's size and the Y-axis position of the camera to keep it visible, but it wasn't successful. There is the solution of keeping the object in the bottom-left corner of the screen, but if the object is too big, only part of it is visible to the camera. I really have no more ideas ^^
Thank you for your help!
I guess you are using an orthographic camera. For an orthographic camera, the size is the number of Unity units from the center of the camera to the top/bottom of the screen. The width is then derived from the aspect ratio. So if you know how big the objects are, this should be easy!
You can get or set the main camera size with Camera.main.orthographicSize
then get/set the aspect ratio (width/height) with Camera.main.aspect
and you can reset it after with Camera.main.ResetAspect();
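For example, a quick sketch that sizes the camera to fit a room of roomWidth x roomHeight world units (placeholder values):
// The vertical fit needs height / 2; the horizontal fit needs
// (width / aspect) / 2; take whichever is larger so both dimensions fit.
float roomWidth = 20f;  // placeholder room size in world units
float roomHeight = 12f;
Camera.main.orthographicSize = Mathf.Max(
    roomHeight / 2f,
    roomWidth / (2f * Camera.main.aspect));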
I'm currently programming a 2D top-down Unity game, and I want to set up the camera so that only a specific area is visible. That is, I know the size of my area, and when the camera, which is following the player, reaches the border of the area, I want it to stop.
So here is my question: I know where the camera is and how it can follow the player, but I don't know how to calculate the distance between the border of the field and the border of what the camera sees. How can I do that?
Essentially, treat your playable area as a rectangle. Then, make a smaller rectangle within that rectangle that accounts for the camera's orthographic size. Don't forget to include the aspect ratio of your camera when calculating horizontal bounds.
Rect myArea;             // this stores the bounds of your playable area
Camera cam;              // this is your orthographic camera, probably Camera.main
GameObject playerObject; // this is your player

float newX = Mathf.Clamp(
    playerObject.transform.position.x,
    myArea.xMin + cam.orthographicSize * cam.aspect,
    myArea.xMax - cam.orthographicSize * cam.aspect
);
float newY = Mathf.Clamp(
    playerObject.transform.position.y,
    myArea.yMin + cam.orthographicSize,
    myArea.yMax - cam.orthographicSize
);
cam.transform.position = new Vector3(newX, newY, cam.transform.position.z);
If you're using an alternative plane (say xz instead of xy), just swap out the corresponding dimensions in all the calculations.
Is there a way that I could basically develop my XNA game at 1080p (or 720p) as my default resolution and then, depending on the set resolution, just scale everything in the game to the proper size, without having to set the scaling factor in every sprite's Draw() method?
My thought is that I could develop all the graphics, configure coordinates, etc. based on a resolution of 1080p, but then for the Xbox just set the resolution to 720p and scale down (so that the Xbox sees everything at 720p and therefore handles all resolutions as described in the developer docs), and on PC scale to any needed resolution or aspect ratio by automatically letterboxing the view for resolutions that are not 16:9.
I already have my game set up so that SpriteBatch.Begin() and End() are called at the highest level, around all the other Draw calls, so technically I could just pass the scaling matrix in there. But whenever I do that, something weird happens, like the view going off-center or only taking up a quarter of the screen.
Is there a best practice way for achieving this?
If you set a scaling matrix in SpriteBatch.Begin, then it will scale the sizes and positions of every single sprite you draw, with that SpriteBatch, up until you call End.
SpriteBatch uses a client space where zero is the upper left corner of the Viewport, and one unit in that space is equivalent to one pixel in the viewport.
When you give SpriteBatch a transformation, the sprites that you draw will have that transformation applied to them for you, before they are drawn. So you can (and should) use this same technique to translate your scene (to centre it on your player, for example).
For Example:
Your game is developed at 720p and you're using SpriteBatch without a transformation. You have a sprite centred at the bottom right corner. Let's say its texture is (32, 32) pixels and the sprite's origin is (16, 16) (origin is specified in texture space, so this is the centre of the sprite). The sprite's position is (1280, 720). The sprite's scale is 1, which makes its resulting size (32, 32). You will see the top left quarter of the sprite at the bottom-right corner of your screen.
Now you move to a screen that is 1080p (1.5 times bigger than 720p). If you don't add a scaling matrix to SpriteBatch, you can see the whole sprite, with its centre two-thirds of the way across the screen, rightwards and downwards.
But you want to scale up your whole scene so that at 1080p it looks just like it did at 720p. So you add the matrix Matrix.CreateScale(1.5f, 1.5f, 1f) (note using 1 for Z axis, because this is 2D not 3D) to your SpriteBatch.Begin, and do nothing else.
Now your scene of sprites will be scaled 1.5 times bigger. Without making any changes to the actual Draw call, your sprite will be drawn at position (1920, 1080) (the bottom right corner of the screen), its size will be (48, 48) (1.5 times bigger), and its origin will still be the centre. You will see the sprite's top-left quadrant at the bottom-right corner of your screen, just like you did at 720p, and at the same relative size.
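For reference, a rough sketch of what that looks like with the XNA 4.0 Begin overload (assuming a 720p design resolution):
// Scale from the 720p design resolution up/down to the actual backbuffer.
float scale = GraphicsDevice.Viewport.Height / 720f;
Matrix scaleMatrix = Matrix.CreateScale(scale, scale, 1f);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  null, null, null, null, scaleMatrix);
// ...all existing Draw calls, still using 720p coordinates...
spriteBatch.End();
If the target aspect ratio isn't 16:9, you'd combine this with a viewport offset to letterbox the view, as you described.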