I am new to Unity and Oculus. I have a bunch of images (their paths and other information are loaded from a JSON file) that I am trying to render in a VR room, and I want the user to be able to move these images around the room using Oculus Touch.
I placed an empty GameObject with a script that iterates through the JSON and instantiates a prefab for each entry. The prefab has Rigidbody, Box Collider, and OVRGrabbable components (so it can be grabbed in VR), plus a script that loads the image into a Sprite.
What is working?
Images are getting rendered and can be grabbed and moved.
What is not working as desired?
I followed this tutorial, and as shown there the angles of the cube are preserved just fine. But when an image is grabbed and rotated, it doesn't keep its side angles, as shown in the following image:
Question
Is there any way I can fix this? I tried looking for it online, but as I am new to Unity I am not quite sure what exactly I am missing.
Any clues would be really appreciated.
I think the problem lies in your hierarchy. Your Images game object (the parent of your DisplayImage) has a scale of (1.88, 1, 1). Rotating child objects whose parent has a non-uniform scale gives you funky results in Unity. Try resetting the scale of your Images object to (1, 1, 1) and see if that helps.
This behaviour occurs because of the nature of the parent/child relationship. The child's coordinate system is relative to its parent's. We can think of the parent as defining a coordinate system for its children; this is what we call local space. So when we perform any linear transformation on the parent, it is as if we're performing it on the whole coordinate system that the children use to define their orientation. Translating the parent moves all of its children with it, because the whole local coordinate space has moved with the parent. Note that the children will still have the same localPosition, but their global position will have changed.
The same logic applies to scaling and rotation. Rotating the parent essentially rotates the whole coordinate space that our child is using around the center point of the parent (which would be the point (0,0,0) in local space). So all children would then be rotated as if they were an extension of the parent object.
In our situation we've scaled our parent, thus scaling the whole coordinate system we use to define our child object. This means that anything using the parent's coordinate space will also be scaled according to the parent's scale. In our situation, the parent was scaled by (1.88,1,1). So everything attached to the parent was also scaled along the parent's X-Axis by 1.88, resulting in the weird effect seen in your screenshots. Despite rotating our child object, it's still scaled along the parent's X-axis.
(Link to the official documentation on this.)
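A quick way to see this inherited scale from a script is to compare the child's localScale with its lossyScale (Unity's name for the effective world-space scale accumulated through the hierarchy). This is a small diagnostic sketch; attach it to the child (e.g. your DisplayImage):

```csharp
using UnityEngine;

// Diagnostic: logs the local vs. inherited world-space scale of this object.
public class ScaleDebug : MonoBehaviour
{
    void Start()
    {
        // localScale is what you set in the Inspector; lossyScale includes
        // everything inherited from parents, e.g. the (1.88, 1, 1) above.
        Debug.Log($"localScale: {transform.localScale}, lossyScale: {transform.lossyScale}");
    }
}
```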
The solution to this is to apply your linear transformations as deep in the hierarchy as possible. Here, instead of scaling the parent, scale the child. If the parent object needs to be scaled, or its scale changes on the fly, another solution would be to remove the child from the parent/child hierarchy and manipulate its transform based on the old parent's transform in a script. In this case you could set the unlinked child's position and rotation to the parent's position and rotation, but ignore the scale.
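A minimal sketch of the "unlink and follow" approach described above (the script and field names here are my own, not from the question): detach the object from the scaled parent, then copy only the old parent's position and rotation each frame, never its scale:

```csharp
using UnityEngine;

// Attach to the image object. "target" is the former (scaled) parent.
public class FollowWithoutScale : MonoBehaviour
{
    public Transform target; // hypothetical reference to the old parent

    void Start()
    {
        // Remove this object from the scaled parent's hierarchy.
        transform.SetParent(null);
    }

    void LateUpdate()
    {
        // Mirror position and rotation only; the parent's scale is ignored.
        transform.SetPositionAndRotation(target.position, target.rotation);
    }
}
```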
Following @Adam Stox's answer, I'll share a solution that usually works in this case (link to the original answer):

Don't child the objects directly to a non-uniformly scaled parent.
Instead, child this non-uniform object to an empty object, which will
be the parent of the other objects too; since an empty object is
scaled (1, 1, 1), childing other objects to it doesn't cause this problem.

I don't know your object hierarchy, but in the link you can find an example, so you can adapt it to your specific case.
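In code, that restructuring could look like the following sketch (object names are illustrative, not from the question): an empty root keeps the default unit scale, and both the scaled object and the image become its children rather than the image being nested under the scaled object:

```csharp
using UnityEngine;

public class HierarchyFix : MonoBehaviour
{
    public Transform scaledBackground; // the non-uniformly scaled object
    public Transform displayImage;     // the grabbable image

    void Start()
    {
        // An empty root GameObject starts at the default (1, 1, 1) scale.
        var root = new GameObject("ImagesRoot").transform;

        // Both become siblings under the uniform-scale root, so the image
        // no longer inherits the background's non-uniform scale.
        scaledBackground.SetParent(root, true);
        displayImage.SetParent(root, true);
    }
}
```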
You could use RigidbodyConstraints to lock the rotation on one axis.
The other option would be to fix the rotation in LateUpdate using a script.
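Both options sketched together below, assuming the grabbable object has a Rigidbody (in practice you would pick one or the other): freezing rotation axes through RigidbodyConstraints, or forcing the unwanted axes back to zero in LateUpdate after all other movement has run:

```csharp
using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class RotationLock : MonoBehaviour
{
    void Start()
    {
        // Option 1: physics will never rotate the body around X or Z.
        GetComponent<Rigidbody>().constraints =
            RigidbodyConstraints.FreezeRotationX | RigidbodyConstraints.FreezeRotationZ;
    }

    void LateUpdate()
    {
        // Option 2: after grabbing/physics, keep only the Y rotation.
        var e = transform.eulerAngles;
        transform.rotation = Quaternion.Euler(0f, e.y, 0f);
    }
}
```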
If the problem only occurs with sprites, try making the sprite a child of a cube whose renderer is disabled.
There could be more than one thing at play, and it's hard to tell without seeing the prefab hierarchy or the source of OVRGrabbable.
One thing that comes to mind is that you should manipulate the position and rotation of the object through its Rigidbody.
The prefab could have built-in transformations that persist: is everything properly aligned before you interact with the collider? Also, the rotation of the image relative to the collider looks similar to the rotation of the collider relative to the world. Could it be that you nested two objects?
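A minimal sketch of the Rigidbody suggestion (the target fields are illustrative, not part of OVRGrabbable): drive the object with Rigidbody.MovePosition/MoveRotation in FixedUpdate instead of writing to its Transform directly, so the physics state stays consistent with the grab:

```csharp
using UnityEngine;

[RequireComponent(typeof(Rigidbody))]
public class MoveViaRigidbody : MonoBehaviour
{
    public Vector3 targetPosition;    // where the grab wants the object
    public Quaternion targetRotation; // hypothetical fields for illustration

    Rigidbody rb;

    void Awake()
    {
        rb = GetComponent<Rigidbody>();
    }

    void FixedUpdate()
    {
        // Unlike writing transform.position/rotation, MovePosition and
        // MoveRotation cooperate with interpolation and collision handling.
        rb.MovePosition(targetPosition);
        rb.MoveRotation(targetRotation);
    }
}
```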
Related
So I used the skinning editor to create the bone structure for a PSB file.
The character is in the centre of the canvas, but when I try to flip it by scale it changes position.
I guess the gray ball in the picture is the pivot point, so I googled how to change the pivot, but the tutorials are all for a single game object. I also don't know why the Flip option on the Sprite Renderer didn't change anything. Here is my setup:
A GameObject's parent's position is its pivot point. So if you want to change the pivot point of your sprite, select all the children (all the GameObjects nested under "player_side_shadow") and change their positions. When you change the position of a child, you're changing its relative (local) position in relation to its parent, which acts as its pivot point when you rotate the parent.
edit
As pointed out by derHugo: if the sprite(s) are already animated then you don't want to break existing frames. Instead you would apply the same process but add another parent (an empty gameobject) over everything, and rotate/flip that new parent.
I am trying to calculate the distance between objects (trying both position and localPosition with the Vector3.Distance method), but there seems to be a problem.
As you can see in the image, the two objects in the scene are not located at the same position; however, their Transforms tell a different story: they have exactly the same transform info (position, rotation, and scale).
I'm sure it's something simple I'm missing, but how do I calculate the distance between two objects that are not at the same location yet have identical transforms?
The following solution applies to the problem described in the original post, not to programmatically generated meshes.
I see that one of them doesn't have its pivot centered.
Make both of your objects child of an empty gameobject, this way you can move them relative to their parent. This will be like changing their pivot.
Calculate the distance based on the parent gameobjects.
Example:
Parent object1 to an empty gameobject. Move object1. Notice the transform of the parent doesn't change. So, you just have to move object1 so that its center aligns with the parent transform. Then, from there, create a prefab. This is your new object1.
Do this for any objects you want to have control on their pivot.
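Once each mesh is centered under its own empty parent, the distance check becomes a plain Vector3.Distance between the parents. A small sketch (field names are placeholders):

```csharp
using UnityEngine;

public class ParentDistance : MonoBehaviour
{
    public Transform parentA; // empty parent of object1
    public Transform parentB; // empty parent of object2

    void Update()
    {
        // The parents' positions now act as the objects' effective pivots,
        // so the distance reflects where the meshes actually sit.
        float d = Vector3.Distance(parentA.position, parentB.position);
        Debug.Log($"Distance: {d}");
    }
}
```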
ALTERNATIVE:
If the preceding solution doesn't suit your needs, an easy way to achieve what you want would be to calculate the distance based on the transform.renderer.bounds.center
Your calculation would then be based on the center of the bounds of the object, not its actual position, eliminating the pivot problem.
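A sketch of the bounds-based alternative. Note that in current Unity versions the renderer is fetched with GetComponent<Renderer>() (or assigned in the Inspector) rather than the old transform.renderer shortcut:

```csharp
using UnityEngine;

public class BoundsDistance : MonoBehaviour
{
    public Renderer rendererA; // renderer of object1
    public Renderer rendererB; // renderer of object2

    void Update()
    {
        // bounds.center is the world-space center of the rendered geometry,
        // independent of where each model's import pivot happens to sit.
        float d = Vector3.Distance(rendererA.bounds.center, rendererB.bounds.center);
        Debug.Log($"Distance (bounds centers): {d}");
    }
}
```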
I'm developing an Android augmented-reality game using the Unity 3D engine and the Vuforia extension, where I need to move a character over an image target.
The problem is that when I make the character a child of the image target, the movement is as if the character is "glued" to the plane: it barely moves from its position, and only very slowly.
I have already tested without augmented reality and the character's movement is completely fine, so I don't know what I am doing wrong.
Thanks in advance.
There's no need for the object to be a child of the ImageTarget; AR will work both ways.
Wherever you place your object, Vuforia will calculate its position relative to the ImageTarget and place it there automatically. So if your object is on the ImageTarget (but not a child), it will be shown on top as-is. Just make sure you don't make the AR Camera a parent or child of any other object.
As for hiding the object when the target is lost, you can use the tracking event handlers to enable and disable the object, so it disappears.
I really hope this solves your problem.
I think I know what it is! Check your Inspector panel to see if you've got a Rigidbody component. They're usually tricky to handle; try disabling the Rigidbody and see if that solves anything.
I'm new to Unity. I instantiate a new prefab GameObject from a script as follows:
GameObject newArrow = (GameObject)Instantiate(arrowPrefab);
newArrow.transform.position = arrowSpawnTransform.position;
But this creates the object in the root of the hierarchy (and not inside NGUI's "UI Root"). When I add any object outside of the UI Root, it ends up far away from the camera and at a huge scale. Can someone help me with how to add the newly created prefab under "UI Root"?
It would also be great if someone could explain the positioning and scaling relationship between native Unity and NGUI. I try hard but I don't understand where to put which object, and at what size, so that it comes out as expected. I'll appreciate any help. Thanks!
EDIT:
I have found a way to place the new prefab inside "UI Root" thru:
newArrow.transform.parent = gameObject.transform.parent;
after instantiating.
But the scaling is still huge; the object is many times bigger than the screen. Please help me with this. What should I be doing?
When working with UI elements in NGUI, don't use Instantiate. Use NGUITools.AddChild(parent, prefab).
NGUI's scale with respect to the rest of the objects in the scene is rather microscopic. If you look at the UIRoot object, you will notice that its scale is measured in thousandths of a meter (default object resolution of 1 typically represents a meter). That is why when you instantiate a prefab and attach it to any object under the UIRoot it appears gigantic relative to the scale of the UIPanel. You could manually scale down the object to the appropriate size, or let NGUI do it for you (as indicated by Dover8) by utilizing NGUITools.AddChild(parent, prefab). In fact, this is the proper way to do so. Don't try to do it manually. The results will be unpredictable and can lead to inappropriate behavior of components. Here's a link to Tasharen's forum on this topic: http://www.tasharen.com/forum/index.php?topic=779.0
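The spawn code from the question, rewritten with NGUITools.AddChild. This is a sketch assuming uiRoot references your "UI Root" object and arrowPrefab/arrowSpawnTransform are the fields from the question:

```csharp
using UnityEngine;

public class ArrowSpawner : MonoBehaviour
{
    public GameObject uiRoot;            // the NGUI "UI Root" object
    public GameObject arrowPrefab;
    public Transform arrowSpawnTransform;

    void SpawnArrow()
    {
        // AddChild parents the instance under uiRoot and normalizes its
        // layer and local scale to match NGUI's tiny UI coordinate space.
        GameObject newArrow = NGUITools.AddChild(uiRoot, arrowPrefab);
        newArrow.transform.position = arrowSpawnTransform.position;
    }
}
```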
Positioning is a bit more complicated. Everything is relative to anchors. Anchor positions are a relationship between a parent object (usually a panel) and a target (usually some form of widget such as a sprite or label). They are controlled by four values relative to the edges of the panel (or parent object), right, left, bottom, and top with respect to the edges of the target (varies by position). Each of these are modified by an integer value (in pixels) that modify the dimensions of the target relative to the parent. There are many examples included with NGUI. I suggest that you look over them. In particular, pay attention to Example 1 - Anchors. I learned a lot from studying these examples, but they don't really cover the full power of NGUI, but they are an excellent starting point.