I'd like to know what code is behind Camera:ScreenPointToRay, whether it be Unity or Roblox.
My situation is that I have a position in world space and the camera's ViewportSize.
I'd like to have a function that does what ScreenPointToRay does.
I'm assuming you'd only need those two parameters to create this function, but I have no clue how.
Can someone spoon-feed me on this?
There are several ways you can do this calculation. This version will only work for perspective cameras, but it can be modified to work on orthographic cameras as well. Unfortunately I am unable to share code, but I will try to explain as best I can.
1. From your camera, generate a Matrix4x4 by inverting the result of multiplying Camera.projectionMatrix by Camera.worldToCameraMatrix.
2. Convert your screen-space pixel into a clip-space coordinate: (0, 0) maps to (-1, -1), and (Camera.pixelWidth, Camera.pixelHeight) maps to (1, 1).
3. Use the Matrix4x4.MultiplyPoint function to multiply your clip point by the matrix from step 1.
4. The direction of your ray is the result from step 3 minus the camera's position.
5. Construct the ray using the direction from step 4 and the camera's position as the ray origin.
If you are trying to do this outside of the Unity API, the Matrix4x4.MultiplyPoint function will need to be created as well; it multiplies the point by the matrix and then divides by the homogeneous w component.
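Put together, the steps look roughly like this. This is only a sketch using Unity's built-in types (the function name is mine, and it is untested); outside Unity you would substitute your own 4x4 matrix type and your own MultiplyPoint:

// A sketch of the five steps above, using Unity's types.
Ray ManualScreenPointToRay(Camera cam, Vector2 screenPoint)
{
    // Step 1: clip space -> world space.
    Matrix4x4 clipToWorld = (cam.projectionMatrix * cam.worldToCameraMatrix).inverse;

    // Step 2: map the pixel to the [-1, 1] clip range.
    Vector3 clipPoint = new Vector3(
        2f * screenPoint.x / cam.pixelWidth - 1f,
        2f * screenPoint.y / cam.pixelHeight - 1f,
        0.5f); // any depth inside the frustum will do for a perspective camera

    // Step 3: unproject (MultiplyPoint performs the perspective divide).
    Vector3 worldPoint = clipToWorld.MultiplyPoint(clipPoint);

    // Steps 4 and 5: build the ray from the camera through that point.
    Vector3 origin = cam.transform.position;
    return new Ray(origin, (worldPoint - origin).normalized);
}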
You can check this article; it goes through using the matrices available on Unity's Camera to create rays for all pixels inside a compute shader.
The Roblox API has a section on the function you're looking for: ScreenPointToRay.
The Unity Scripting Reference also has a section on its own ScreenPointToRay function.
Related
I'm trying to rotate a beam/cuboid around a pivot using MRTK, Unity, and the HoloLens 1 when you're doing the pinch-and-hold gesture. The beam should remain in place once you've let go of the pinch.
My initial thought was to get the Cartesian coordinates of the pinch and, based on their position relative to the pivot, rotate the beam by however many degrees are needed. E.g. the hand position while pinching is (1,1,0) and the pivot position is (0,0,0), so the beam should be rotated 45 degrees in the XY plane (we ignore the z components). I'm not sure how to go about doing this, as the documentation seems to indicate that getting the coordinates of the hand/pinch only works on the HoloLens 2 (https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/Input/HandTracking.html#hand-tracking-events & https://microsoft.github.io/MixedRealityToolkit-Unity/api/Microsoft.MixedReality.Toolkit.Input.IMixedRealityHand.html#Microsoft_MixedReality_Toolkit_Input_IMixedRealityHand_TryGetJoint_).
Does anyone know how to go about doing this, or can at least point me in the right direction? (Tutorials/code/assets would be much appreciated!)
Thank you!
I'm not sure how to go about doing this, as the documentation seems to indicate that getting the coordinates of the hand/pinch only works on the HoloLens 2
Yes, the HoloLens 1 does not support hand tracking, such as touching holograms directly with your hands or pointing and committing with hands. It is recommended that you try the gaze-and-commit interaction model; with it you can easily get the position of the GGVPointer.
Pinch-to-rotate interaction can be achieved by adding the ManipulationHandler component from MRTK to your cube. The component can be configured to allow two-handed manipulation like this.
I'm not sure how to go about doing this, as the documentation seems to indicate that getting the coordinates of the hand/pinch only works on the HoloLens 2.
There are a few ways to query the pointer position. The code below should return the right-hand GGVPointer position on the HoloLens.
// Namespaces: Microsoft.MixedReality.Toolkit.Input (PointerUtils, GGVPointer)
// and Microsoft.MixedReality.Toolkit.Utilities (Handedness).
Vector3 pos;
// Ask MRTK for the right-hand gaze-gesture-voice (GGV) pointer, if one exists.
GGVPointer pointer = PointerUtils.GetPointer<GGVPointer>(Handedness.Right);
if (pointer != null)
{
    pos = pointer.Position;
}
In case you just want to rotate the object around its center, you can use the BoundingBox component. It creates handles that can be pinched and moved to rotate an object, and you can disable the axes you don't want. It works even on the HoloLens 1.
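For example, something along these lines (a sketch against MRTK v2; the property names are from its BoundingBox API but may differ between MRTK versions):

// Attach MRTK's BoundingBox and keep only the rotation handles you want.
var box = cube.AddComponent<BoundingBox>();
box.ShowScaleHandles = false;        // rotation only, no scaling
box.ShowRotationHandleForX = false;  // disable the axes you don't need...
box.ShowRotationHandleForZ = false;  // ...leaving only Y-axis rotation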
I'm trying to learn some more about vectors in a 2D space and how to use them in game development.
I have created a small project for visualising a 'projection' of vector A onto vector B in C# using the MonoGame framework.
This is all working fine, but now I want to move my origin (which is currently in the top-left) to a custom position, so I can for example draw my lines in the middle of the screen.
I want to do this without any help from the library first, to understand what is happening.
But I can't figure out how to do this, and whether this is actually best practice in vector spaces or I should just 'draw' my lines with an offset..
My understanding of math symbols and functions is not great, so if you provide me with a mathematical answer, please explain the symbols as well.
EDIT:
I created another project for visualising whether a point is within a certain angle, but this time I tried to draw everything with an offset (right) next to the original vectors (left).
As you can see it looks fine if I draw it with an offset, but I can't imagine this method being used in games.. Mainly because everything has a weird offset (duh..) with respect to my mouse, so you would need to implement your own cursor (which games do, but still...).
EDIT2:
Let's make my problem a little bit clearer..
If you look at my second example, imagine the origin on the right to be an agent (NPC or player or whatever) and the segment BC (and BC2) to be its vision field.
If I want to calculate what is within its vision, I can do that the same way as in the example, but this 'origin' point would be at (0,0) (top-left), and that is behind the agent.
I'm probably missing something obvious and thinking way too hard about this..
So I finally found out how this works..
Apparently you work with different spaces or frames instead of moving the origin (also called the reference).
A space can live inside another space, but let's keep it simple for now with two spaces.
The first space is your 'main' space (most of the time called world in game development).
The second space is your 'view' space (or camera).
(I use world and view throughout this answer.)
I was doing all my vector calculations inside my world space. So when drawing these vectors to the screen, they are drawn at positions relative to the world's reference (which is the top-left of the screen).
To draw my vectors somewhere else I need to translate them.
Translation means moving vectors along the axes.
This act of 'changing' the position/scale/rotation of a vector is called a transformation.
We can see transformations in a vector space simply as a change from one space to another.
This translation is done with a translation matrix (more info in the resources listed below).
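To make that concrete: in 2D with homogeneous coordinates (an extra 1 appended to every point), a translation by (tx, ty) is a matrix multiplication:

| 1  0  tx |   | x |   | x + tx |
| 0  1  ty | * | y | = | y + ty |
| 0  0  1  |   | 1 |   |   1    |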
So with the knowledge of these spaces and transformations I fixed my program.
All my vectors are initialized the same way as before, but when I draw them to the screen, I translate them according to a pre-defined translation matrix. I call this matrix my viewMatrix, because it translates vectors from world space to view space.
But there is one thing that needs fixing.
The vector pointA is not defined in world space, but in view space.
That means that when my mouse is at position (20,20), this position is different from position (20,20) in my world space.
To fix this I need to transform my pointA vector with the inverse of the translation matrix. This converts the vector into a vector inside world space.
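In MonoGame, both directions boil down to something like this (a sketch; the screen size, numbers, and names are mine):

// using Microsoft.Xna.Framework; using Microsoft.Xna.Framework.Input;
// World -> view: move the origin to the middle of an 800x480 screen.
Matrix viewMatrix = Matrix.CreateTranslation(400f, 240f, 0f);

// When drawing, transform world-space points into view space.
Vector2 worldPoint = new Vector2(50f, 30f);
Vector2 screenPoint = Vector2.Transform(worldPoint, viewMatrix);

// The mouse lives in view space, so apply the inverse to get back to world space.
Matrix inverseView = Matrix.Invert(viewMatrix);
MouseState mouse = Mouse.GetState();
Vector2 pointA = Vector2.Transform(new Vector2(mouse.X, mouse.Y), inverseView);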
So that's about it..
It took me 2 days to figure this out..
Here is a fixed version of the second example.
Left: my world space
Right: my view space
Notice how my mouse is now properly aligned in my view space instead of in my world space
Here are some resources I collected along the way:
Article - World, View and Projection Transformation Matrices
The True Power of the Matrix (Transformations in Graphics) - Computerphile
RB Whitaker - Basic Matrices
Making a Game Engine: Transformations
I have some rotation code in Unity to make a rigidbody rotate and point to a target on axis, however I've encountered a strange little issue where it ignores vectors that are directly behind it until the vector moves just a bit.
This is a minor problem as roughly half of the potential targets will be behind the moving vehicle using this code - on a 2.5D plane. This is the snippet as it currently exists. What needs to be fixed to acknowledge targets that are directly behind this?
TargetLocation = TempTarget.position;
// Vector from the boat to the target.
Vector3 Heading = TargetLocation - transform.position;
// -transform.up is the boat's forward here; the cross product yields the torque axis.
Vector3 Rotate = Vector3.Cross(-transform.up, Heading.normalized) * 200;
boatRB.AddTorque(Rotate);
I guess the problem is with the torque: it does nothing if you apply it along the axis of rotation, which in your case happens when the target is right behind the transform. The cross product of two (anti)parallel vectors is the zero vector, so your Rotate vector vanishes. To overcome this, I would suggest using the dot product to find whether the target is right behind the transform and then using a different technique to rotate it. One good method can be using Transform.LookAt to rotate.
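For example (a sketch using the names from the snippet above; the 0.99 threshold is arbitrary, and instead of LookAt this version simply nudges the heading sideways so the cross product never degenerates):

Vector3 heading = (TargetLocation - transform.position).normalized;
// Dot is ~ -1 when the target is directly behind the boat's forward (-transform.up).
if (Vector3.Dot(-transform.up, heading) < -0.99f)
{
    // Add a tiny sideways component so the cross product below is non-zero.
    heading = (heading + transform.right * 0.01f).normalized;
}
Vector3 rotate = Vector3.Cross(-transform.up, heading) * 200f;
boatRB.AddTorque(rotate);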
References:
Dot Product Application
Transform LookAt Method
I managed to fix this issue using a marker and a rotation disc for my vehicle to follow when it turns. I originally wanted to solve it purely in code, but this particular fix turned out to be far smoother and better in general, and it actually works better with my AI format.
I'm currently working on a camera for a game. But I got stuck at the rotation.
When I move my mouse across the x- or y-axis, I want the camera to rotate around my character.
What would be a formula to calculate this vector, if the distance to the character is always the same?
I am doing this in Unity, with C#, if this is of any help.
This function may help: Transform.RotateAround(Vector3 point, Vector3 axis, float angle).
You can read the Unity Scripting Reference for more info; a short sketch follows below.
Oh, and I think you should tag your next questions with "unity3d".
But you get the best Unity3D help at the Unity Answers forum: http://answers.unity3d.com/index.html.
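Something like this, for instance (a sketch; the class and field names are mine):

// Attach to the camera; orbits it around the character as the mouse moves.
using UnityEngine;

public class MouseOrbit : MonoBehaviour
{
    public Transform character;
    public float speed = 120f; // degrees per second

    void LateUpdate()
    {
        // Horizontal mouse movement orbits around the world up axis,
        // vertical movement around the camera's own right axis.
        transform.RotateAround(character.position, Vector3.up,
            Input.GetAxis("Mouse X") * speed * Time.deltaTime);
        transform.RotateAround(character.position, transform.right,
            -Input.GetAxis("Mouse Y") * speed * Time.deltaTime);
        transform.LookAt(character);
    }
}

RotateAround keeps the distance to the pivot point constant, which matches the fixed-distance requirement.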
You can utilise spherical coordinates; they seem to fit the purpose of camera movement better than Euler angles. The Cartesian vector you need can be obtained from simple formulas, as described there. For instance:
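(A sketch of the conversion; r is the fixed distance, and yaw/pitch are angles in radians that you drive with the mouse. All names here are mine.)

// Spherical (r, yaw, pitch) -> Cartesian offset from the character.
float x = r * Mathf.Cos(pitch) * Mathf.Sin(yaw);
float y = r * Mathf.Sin(pitch);
float z = r * Mathf.Cos(pitch) * Mathf.Cos(yaw);
Vector3 cameraPosition = character.position + new Vector3(x, y, z);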
What is the best way to position the camera so that I can see what I paint in a certain region?
E.g. I'm painting a rectangle at around (300, 400, 2200). Where do I have to place the camera, and which view do I have to set, so that everything fits in?
Is there a trick or a special method, or do I have to try it out with different camera positions?
There is no standard function that will position the camera this way, because there are many options (think of different sides and rotations).
A trick you could use is:
Take the center of the MeshGeometry3D by using the Bounds property, and add the normal vector several times to position the camera.
Then take the normal vector of the plane, invert it, and use it as the LookDirection for the camera.
How far you need to move the camera depends on the view angle of the camera. It can be calculated; let me know if you want to know how (it will take me a little extra time). A sketch of the whole idea follows below.
More information can be found here too
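For what it's worth, a rough sketch of that calculation for WPF's PerspectiveCamera (the method name and the bounding-sphere distance formula are mine, so treat it as an approximation rather than an exact fit):

// using System; using System.Windows.Media.Media3D;
void FitCameraToMesh(PerspectiveCamera camera, MeshGeometry3D mesh, Vector3D planeNormal)
{
    Rect3D bounds = mesh.Bounds;
    Point3D center = new Point3D(
        bounds.X + bounds.SizeX / 2,
        bounds.Y + bounds.SizeY / 2,
        bounds.Z + bounds.SizeZ / 2);

    // Radius of a sphere that is sure to contain the mesh.
    double radius = new Vector3D(bounds.SizeX, bounds.SizeY, bounds.SizeZ).Length / 2;

    // distance = radius / tan(fov / 2): far enough back for the bounds to fit the view angle.
    double halfFov = camera.FieldOfView * Math.PI / 360.0;
    double distance = radius / Math.Tan(halfFov);

    // "Add the normal several times": step back along the plane normal...
    planeNormal.Normalize();
    camera.Position = center + planeNormal * distance;
    // ...and look back along the inverted normal, as described above.
    camera.LookDirection = -planeNormal;
}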