Calculate distance between two objects not located together with identical transforms? - c#

I am trying to calculate the distance between two objects (trying both position and localPosition with the Vector3.Distance method), but there seems to be a problem.
As you can see in the image, the two objects in the scene are not located at the same position; however, their transforms tell a different story: they have exactly the same transform info (position, rotation, and scale).
I'm sure something simple is being missed here, but how do I calculate the distance between two objects that are not at the same location yet have identical transforms?

The following solution applies to the problem described in the original post, not to programmatically generated meshes.
I see that one of them doesn't have its pivot centered.
Make both of your objects children of an empty GameObject; this way you can move them relative to their parent, which is effectively like changing their pivot.
Then calculate the distance based on the parent GameObjects.
Example:
Parent object1 to an empty GameObject. Move object1 and notice that the transform of the parent doesn't change. So you just have to move object1 so that its center aligns with the parent's transform. Then, from there, create a prefab. This is your new object1.
Do this for any objects you want to have control on their pivot.
ALTERNATIVE:
If the preceding solution doesn't suit your needs, an easy way to achieve what you want is to calculate the distance based on the renderer's bounds center, GetComponent<Renderer>().bounds.center (the older transform.renderer shortcut is obsolete in current Unity versions).
Your calculation would then be based on the center of the object's bounds, not its actual position, eliminating the pivot problem.
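A minimal sketch of that bounds-based distance, assuming both objects have a Renderer; the field names here are placeholders for your own references:

```csharp
using UnityEngine;

// Sketch: measure distance between the visual centers of two objects,
// ignoring where their pivots happen to be.
public class BoundsDistance : MonoBehaviour
{
    public GameObject objectA; // placeholder references, assign in Inspector
    public GameObject objectB;

    void Update()
    {
        // transform.renderer is obsolete in modern Unity; fetch the
        // Renderer component explicitly instead.
        Vector3 centerA = objectA.GetComponent<Renderer>().bounds.center;
        Vector3 centerB = objectB.GetComponent<Renderer>().bounds.center;

        float distance = Vector3.Distance(centerA, centerB);
        Debug.Log("Distance between render centers: " + distance);
    }
}
```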

Related

How to set the pivot point for an object that is made by multiple objects in Unity?

So I used a skin editor to create the bone structure for a psb file,
the character is in the centre of the canvas, but when I try to flip it by scale it changes position.
I guess the gray ball in the picture is the pivot point, so I googled how to change the pivot, but the tutorials are all for a single game object. And I don't know why the Flip option on the Sprite Renderer didn't change anything. Here is my setup:
A GameObject's parent's position is its pivot point. So if you want to change the pivot point of your sprite, select all the children (all the GameObjects nested under "player_side_shadow") and change their positions. When you change the position of a child, you are changing its relative (local) position in relation to its parent, which acts as its pivot point when you rotate the parent.
edit
As pointed out by derHugo: if the sprite(s) are already animated, you don't want to break the existing frames. Instead, apply the same process but add another parent (an empty GameObject) over everything, and rotate/flip that new parent.

How do you scale a parent game-object without scaling the child game-object in Unity 3D? [duplicate]

I am new to Unity and Oculus. I have a bunch of images (their paths and other information are loaded from a JSON file) which I am trying to render in a VR room, and I want to give the user the experience of moving these images within that room using Oculus Touch.
I placed an empty object with a script that iterates through the JSON and creates a prefab object (which has Rigidbody, Box Collider, and OVRGrabbable components, so that it can be grabbed in VR). This prefab object also has a script responsible for loading the images into a Sprite.
What is working?
Images are getting rendered and can be grabbed and moved.
What is not working as desired?
I followed this tutorial, and as shown here the angles of the cube are preserved just fine. But when an image is grabbed and rotated, it doesn't preserve its side angles, as shown in the following image:
Question
Is there any way that I could fix it? I tried to look for it online but as I am new to Unity I am not quite sure what exactly I am missing.
Any clues would be really appreciated.
I think the problem lies in your hierarchy. Your Images game object (the parent of your DisplayImage) has a scale of (1.88,1,1). Trying to rotate child objects whose parent object has a non-uniform scale in Unity gives you funky results. Try resetting the scale of your Images object to (1,1,1) and see if that helps.
This behaviour occurs because of the nature of the "parent/child" relationship. The child's coordinate system is relative to its parent's. We can think of it as the parent defining a coordinate system for our children. This is what we call local space. So, when we perform any linear transformations on the parent, it's like we're performing them on the whole parent coordinate system that our children are using to define their orientation. So translating the parent will move all of its children with it, because our whole local coordinate space has moved with the parent. Note that our children will still have the same localPosition, but their global position will have changed.
The same logic applies to scaling and rotation. Rotating the parent essentially rotates the whole coordinate space that our child is using around the center point of the parent (which would be the point (0,0,0) in local space). So all children would then be rotated as if they were an extension of the parent object.
In our situation we've scaled our parent, thus scaling the whole coordinate system we use to define our child object. This means that anything using the parent's coordinate space will also be scaled according to the parent's scale. In our situation, the parent was scaled by (1.88,1,1). So everything attached to the parent was also scaled along the parent's X-Axis by 1.88, resulting in the weird effect seen in your screenshots. Despite rotating our child object, it's still scaled along the parent's X-axis.
(Link to the official documentation on this.)
The solution to this is to apply your linear transformations as deep in the hierarchy as possible. Here, instead of scaling the parent, scale the child. If the parent object needs to be scaled, or its scale changes on the fly, another solution is to remove the child from the parent/child hierarchy and manipulate its transform based on the old parent's transform in a script. In this case you could set the unlinked child's position and rotation to the parent's position and rotation, but ignore the scale.
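A minimal sketch of that unlinked-child approach, assuming a script on the former child holding a reference to the former parent; the field name is a placeholder:

```csharp
using UnityEngine;

// Sketch: the object is NOT parented in the hierarchy, but copies the
// target's position and rotation each frame while keeping its own scale.
public class FollowWithoutScale : MonoBehaviour
{
    public Transform target; // the former parent (placeholder name)

    void LateUpdate()
    {
        // Copy position and rotation only; scale is left untouched,
        // so a non-uniform scale on the target never distorts this object.
        transform.position = target.position;
        transform.rotation = target.rotation;
    }
}
```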
Following @Adam Stox's answer, I share a solution that usually works in this case (link to the original answer):
Don't child the objects directly to a non-uniformly scaled parent; child this non-uniform object to an empty object, which will be the parent of the other objects too. Since an empty object has a scale of (1,1,1), childing other objects to it doesn't cause this problem.
I don't know your object hierarchy, but in the link you can find an example, so you can adapt it to your specific case.
You could use RigidbodyConstraints to lock the rotation on one axis.
The other option would be to fix the rotation in LateUpdate using a script.
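A hedged sketch of both options in one place (use one or the other, not both, since the LateUpdate overwrite would mask the constraint); the axis choices here are assumptions, adjust them to the axis you actually need:

```csharp
using UnityEngine;

public class LockRotation : MonoBehaviour
{
    void Start()
    {
        // Option 1: let the physics engine freeze the unwanted axes.
        Rigidbody rb = GetComponent<Rigidbody>();
        rb.constraints = RigidbodyConstraints.FreezeRotationX
                       | RigidbodyConstraints.FreezeRotationZ;
    }

    void LateUpdate()
    {
        // Option 2: overwrite the rotation after all updates have run,
        // keeping only the yaw (Y axis) component.
        Vector3 e = transform.eulerAngles;
        transform.rotation = Quaternion.Euler(0f, e.y, 0f);
    }
}
```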
If the problem only occurs on sprites, then try making the sprite a child of a cube whose renderer you disable.
There could be more than one thing at play, and it's hard to tell without seeing the prefab hierarchy or the source of OVRGrabbable.
One thing that comes to mind is that you should manipulate the position and rotation of the object through its Rigidbody.
The prefab could have a bunch of built-in transformations that persist. Is everything properly aligned before you interact with the collider? Also, the rotation of the image relative to the collider looks similar to the rotation of the collider relative to the world. Could it be that you nested two objects?

Persisting angle of solid objects while rotating


Unity 2D - Dynamically instantiate new prefab inside NGUI's "UI Root"(as Parent)

I'm new to Unity. I instantiate a new prefab GameObject from a script as follows:
GameObject newArrow = (GameObject)Instantiate(arrowPrefab);
newArrow.transform.position = arrowSpawnTransform.position;
But this creates the object in the root of the hierarchy (and not inside NGUI's "UI Root"). When I add any object outside of the UI Root, it is placed far away from the camera and with huge dimensions. Can someone help me add the newly created prefab under "UI Root"?
It would also be great if someone could explain the positioning and scaling differences between native Unity and NGUI. I try hard but don't understand where to keep which object, and at what size, so that it comes out as expected. I'll appreciate any help. Thanks!
EDIT:
I have found a way to place the new prefab inside "UI Root" via:
newArrow.transform.parent = gameObject.transform.parent;
after instantiating.
But the scaling is still huge; it's multiple times bigger than the screen size. Please help me with this. What should I be doing?
When working with UI elements in NGUI, don't use Instantiate. Use NGUITools.AddChild(parent, prefab).
NGUI's scale with respect to the rest of the objects in the scene is rather microscopic. If you look at the UIRoot object, you will notice that its scale is measured in thousandths of a meter (default object resolution of 1 typically represents a meter). That is why when you instantiate a prefab and attach it to any object under the UIRoot it appears gigantic relative to the scale of the UIPanel. You could manually scale down the object to the appropriate size, or let NGUI do it for you (as indicated by Dover8) by utilizing NGUITools.AddChild(parent, prefab). In fact, this is the proper way to do so. Don't try to do it manually. The results will be unpredictable and can lead to inappropriate behavior of components. Here's a link to Tasharen's forum on this topic: http://www.tasharen.com/forum/index.php?topic=779.0
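A minimal sketch of the NGUITools.AddChild approach, assuming the spawning script itself lives under UI Root; the field names are placeholders for your own references:

```csharp
using UnityEngine;

// Sketch: spawn an NGUI widget prefab under the UI hierarchy.
// NGUITools.AddChild instantiates the prefab, parents it, and resets
// its local scale to (1,1,1) so it matches NGUI's virtual pixel units.
public class ArrowSpawner : MonoBehaviour
{
    public GameObject arrowPrefab;        // placeholder: your NGUI prefab
    public Transform arrowSpawnTransform; // placeholder: desired position

    void SpawnArrow()
    {
        // gameObject here is assumed to be under UI Root.
        GameObject newArrow = NGUITools.AddChild(gameObject, arrowPrefab);
        newArrow.transform.localPosition = arrowSpawnTransform.localPosition;
    }
}
```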
Positioning is a bit more complicated. Everything is relative to anchors. Anchor positions are a relationship between a parent object (usually a panel) and a target (usually some form of widget such as a sprite or label). They are controlled by four values relative to the edges of the panel (or parent object): right, left, bottom, and top, with respect to the edges of the target (varies by position). Each of these is modified by an integer value (in pixels) that adjusts the dimensions of the target relative to the parent. There are many examples included with NGUI; I suggest you look over them. In particular, pay attention to Example 1 - Anchors. I learned a lot from studying these examples. They don't cover the full power of NGUI, but they are an excellent starting point.

How to determine if there is an object lying on the Plane at particular coordinates?

I am implementing the A* algorithm in a 3D environment and I have come to a point where I need to determine whether there is something lying on a plane (on which my characters will be walking) at particular coordinates.
I have created a class Board which holds the map of Nodes (each holds the center of its coordinates). So we can say I have discretized the plane into something similar to a chessboard. Now I need to know whether there is something on each Node to create a walkable/unwalkable map of this plane.
How can I do this in Unity3D? Raycasting?
EDIT
There is one thing I can think of, but I think it's a bit inefficient:
Create a temporary collider (taking the area of a Board tile and some height), check whether anything is colliding with it, and then keep translating it for every tile on the Board.
Do you think this would be a good way?
You could use a raycast (Physics.Raycast) from each node coordinate. Make sure objects you are checking for have colliders. This will only check for a single point, however, not for the entire area of the node.
To check for an area, not just a point, above each node, you can use a sphere or capsule cast or check. See the choices in the list of class functions for Physics.
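A minimal sketch of the sphere-cast idea under those assumptions; tileRadius and the cast height are placeholder names, and in practice you would add a layer mask so the walkable plane itself doesn't count as an obstacle:

```csharp
using UnityEngine;

// Sketch: build a walkable/unwalkable map by casting from above each node.
public class WalkabilityScanner : MonoBehaviour
{
    public float tileRadius = 0.5f;  // placeholder: half the Board tile size
    public float castHeight = 100f;  // how far above the node to cast from

    // True when something with a collider occupies the node's area.
    public bool IsBlocked(Vector3 nodeCenter)
    {
        Vector3 origin = nodeCenter + Vector3.up * castHeight;
        // SphereCast covers the node's area, not just a single point.
        // Pass a LayerMask as an extra argument to ignore the ground plane.
        return Physics.SphereCast(origin, tileRadius, Vector3.down,
                                  out RaycastHit hit, castHeight);
    }
}
```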
Another approach is to have a game object with an appropriately shaped trigger collider on each node in your scene. You could keep track of how many other objects (with colliders) are on each node by incrementing and decrementing a counter in the OnTriggerEnter and OnTriggerExit methods of a script (i.e. a MonoBehaviour subclass).
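A minimal sketch of that counter-based approach, assuming each node object carries an appropriately sized collider with "Is Trigger" enabled:

```csharp
using UnityEngine;

// Sketch: a counter tracks how many colliders currently overlap
// this node's trigger volume.
public class NodeOccupancy : MonoBehaviour
{
    private int occupants = 0;

    // The node is walkable when nothing is standing on it.
    public bool IsWalkable { get { return occupants == 0; } }

    void OnTriggerEnter(Collider other) { occupants++; }
    void OnTriggerExit(Collider other)  { occupants--; }
}
```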
Thanks to Ghopper21's answer, I came up with SphereCasting each Node from above it, e.g. Node (0,0,0) would be sphere-cast from (0,100,0).
