I am moving a .obj 3D model horizontally, and when it reaches the extreme left of the screen it expands, which looks really weird.
I even tried changing the projection from perspective to orthographic, and that makes it look even weirder.
// simple movement code
void Update()
{
    transform.Translate(Input.GetAxis("Horizontal") * Time.deltaTime * 3, 0f, 0f);
}
It's an unavoidable effect of rendering a 3D space on a 2D plane. The only way to eliminate perspective distortion is to resort to something like a fisheye projection, which makes the image appear distorted in a circular fashion. Or go orthographic.
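If you want to keep a perspective camera, a narrower field of view reduces (but does not remove) the stretching near the screen edges. A minimal sketch; the values here are only examples:

using UnityEngine;

public class CameraSetup : MonoBehaviour
{
    void Start()
    {
        // Unity's default FOV is 60; smaller values show less edge distortion.
        Camera.main.fieldOfView = 40f;

        // Or go fully orthographic, as suggested above:
        // Camera.main.orthographic = true;
        // Camera.main.orthographicSize = 5f; // half the vertical view size in world units
    }
}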
See also: Edge Distortion on Camera - Unity Forums
I'm working on an RTS game with some pretty extensive UI, so I moved the main camera's output to a quad which only makes up about half the screen, and I'm blitting some UI effects over the rest. My current way of interacting with the game uses unity's Input.mousePosition. When I moved the camera's feed to the quad, obviously those pixel coordinates were distorted, so I fixed them like this:
mapMousePos = (Input.mousePosition * mscaleCorr - mapCorrection * mscaleCorr);
mapCorrection being the pixel offset of the smaller feed, and mscaleCorr being a magic number found through trial and error (a temporary fix).
Point is, now I'm realizing that running this game at a different resolution will almost certainly break these magic numbers.
What I want mapMousePos to be is what Input.mousePosition was before I moved the gameplay to the small quad: going from (0,0) at the bottom left of the quad to the screen (width, height) at the top right of the quad. This is just so it works nicely with ScreenToWorldPoint on my gameplay camera.
I have the camera-feed quad parented to a full-screen quad, and tried using their relative positions to apply the necessary transformations, but it didn't work, I'm guessing because it's a pixel problem.
I've dug around the docs for a solution using the Camera's built-in WorldToScreenPoint function, without any luck. I'm sure I'll bump into a fix eventually, but would greatly appreciate any pointers.
Here's what I've come up with; it's stupid, but it works.
I've placed objects at the bottom left and top right of the quad, stored in code as bL and tR.
Then I convert the mousePosition to a worldPosition using ScreenToWorldPoint(), remap it by subtracting the bottom left position, and get it as a percentage across the screen by dividing it by the delta to the top right. Multiply the percentage by the pixel dimensions of the gameplay camera, and voila.
In code, this:
Vector3 wPos = finalcam.ScreenToWorldPoint(Input.mousePosition);
wPos -= bL.transform.position;
mapMousePos = new Vector2(Mathf.Abs(wPos.x), Mathf.Abs(wPos.y));
mapMousePos = new Vector2(
    mapMousePos.x / (tR.transform.position.x - bL.transform.position.x),
    mapMousePos.y / (tR.transform.position.y - bL.transform.position.y));
mapMousePos = new Vector2(mapMousePos.x * Camera.main.pixelWidth, mapMousePos.y * Camera.main.pixelHeight);
Again, it's dumb, but it seems to work. I'm leaving this up in case anybody knows a cleaner method.
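One possibly cleaner variant (an untested sketch, reusing the same finalcam, bL, and tR references from the snippet above): project the two markers into the final camera's screen space with WorldToScreenPoint, so the whole remap stays in pixels and no world-space subtraction or Abs is needed:

// Screen-space positions of the quad's corner markers on the final camera.
Vector3 blScreen = finalcam.WorldToScreenPoint(bL.transform.position);
Vector3 trScreen = finalcam.WorldToScreenPoint(tR.transform.position);

// Normalized 0..1 position of the mouse across the quad.
Vector2 t = new Vector2(
    (Input.mousePosition.x - blScreen.x) / (trScreen.x - blScreen.x),
    (Input.mousePosition.y - blScreen.y) / (trScreen.y - blScreen.y));

// Scale up to the gameplay camera's pixel dimensions.
mapMousePos = new Vector2(t.x * Camera.main.pixelWidth, t.y * Camera.main.pixelHeight);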
I have a laser turret in Unity3D, which I'd like to turn towards the enemies. The turret consists of a "leg" and a "head" (selected in picture 1). The head can pan and tilt around a spherical joint.
I do the following:
Vector3 targetDir = collision.gameObject.transform.position - turretHead.transform.position;
float step = turnSpeed * Time.deltaTime;
Vector3 newDir = Vector3.RotateTowards(turretHead.transform.forward, targetDir, step, 0.0f);
turretHead.transform.rotation = Quaternion.LookRotation(newDir);
The problem is that since the pivot of the head is not aligned with the laser beam, the turret turns almost in the right direction, but it shoots above the target. (It would hit perfectly if the laser came out of the red axis of the pivot.)
Is there a built-in method or some trick to achieve the correct functionality other than doing the calculation myself?
Okay, here's the quick and easy way to do this. It's probably "better" to do it with proper trig, but this should give you the result you want pretty quickly:
If you don't already have a transform aligned with the barrel, create an empty GameObject and line it up (make sure it's a child of the turret so they move together). Add a reference to its transform in your script.
Then, in your first line, calculate from the new Barrel transform instead of the turretHead transform. Leave everything else the same. This way it calculates from the turret barrel, but moves the turret head.
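Roughly, the change might look like this (a sketch, assuming a barrelTransform field that references the barrel-aligned empty GameObject):

// Only the direction calculation changes: measure from the barrel, not the head pivot.
Vector3 targetDir = collision.gameObject.transform.position - barrelTransform.position;
float step = turnSpeed * Time.deltaTime;
Vector3 newDir = Vector3.RotateTowards(turretHead.transform.forward, targetDir, step, 0.0f);
turretHead.transform.rotation = Quaternion.LookRotation(newDir);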
Now, this approach isn't perfect. If the pivot center is offset too far from the barrel transform, it will be less accurate over large moves, or when aiming at something close by, because the barrel's position after the rotation differs from the position the aim was calculated from, since the rotation pivot is elsewhere. But this can be solved with iteration, as the calculation becomes more accurate the closer the turret gets to its desired goal.
I'm really new to OpenGL and trying to learn from the 'modern' OpenGL tutorials.
I'm using C# and OpenTK.
I'm setting my projectionMatrix with this code:
_projectionMatrix = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver3, GetAspectRatio(), 0.1f, 100f);
After I set it, I upload it to the shaders, use it for transformations, etc. Everything works well this way, as the image below shows:
Now if I set my projection matrix this way, the result is very bad as the next image shows:
_projectionMatrix = Matrix4.CreateOrthographic(GLControl.ClientSize.Width, GLControl.ClientSize.Height, 0.1f, 100f);
Not only is the new view very strange, but if I move forward in the scene the objects start clipping, even with zNear close to zero.
How can I set the orthographic projection correctly so I can have a view close to what I have in perspective?
It seems like you're having trouble understanding orthographic projections.
There are several reasons why your scene doesn't look right in an orthographic view:
The perspective projection gives appropriate depth to the floor so it can be seen; objects look bigger when they're closer to you. The orthographic projection doesn't change their size with depth. Moreover, the reason the floor "disappears" in the orthographic projection is that the floor is "infinitely thin" and oriented horizontally.
Your orthographic projection sizes the view volume so that one pixel in window space corresponds to one unit in object space. Since the rings are (it seems) about 12 units in size, they will look rather small on the screen. One solution may be to pass a smaller size to CreateOrthographic. But this won't solve everything; for instance, the floor will still remain "invisible".
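For example (a sketch that reuses the FOV and GetAspectRatio() from the question), you can pick a reference distance and size the orthographic volume so that objects at that distance appear roughly as large as they do in the perspective view:

// Distance at which the orthographic view should match the perspective one (assumed value).
float referenceDistance = 10f;
float fovY = MathHelper.PiOver3; // same vertical FOV as the perspective matrix
float orthoHeight = 2f * referenceDistance * (float)Math.Tan(fovY / 2f);
float orthoWidth = orthoHeight * GetAspectRatio();
_projectionMatrix = Matrix4.CreateOrthographic(orthoWidth, orthoHeight, 0.1f, 100f);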
Note also:
If you use an orthographic projection, you can even use negative values for zNear, though probably not 0. This isn't the case for a perspective projection.
The orthographic projection can form the basis for a 3D-like projection called the isometric projection.
Ok, this is hard to explain without pictures. I made a very limited model of a glock in Blender with an extruded cube making up the base and a scaled cube making up the barrel.
In Blender, It looks fine:
However, after exporting the model to .fbx and loading it into the compiler, it comes out like this:
I don't know exactly what is going on. Everything on the model is properly UV mapped and the coordinates are correct; it just seems the translation is... off.
Here is my drawModel code:
private void DrawModel(Model model, Matrix world, Matrix view, Matrix projection)
{
    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();
            effect.World = world;
            effect.View = view;
            effect.Projection = projection;
        }
        mesh.Draw();
    }
}
Any pointers will be helpful!
EDIT: After applying rotation, location, and scale, I was able to get them into the right position, but why does it look transparent?
Thank you for all the help!
As Nikola mentioned, later versions of Blender have proper XNA support.
Here's a screenshot of the settings I chose when exporting a model:
Why this happens:
Your depth buffer is not enabled, so when your GPU draws the barrel first and then the stock/handle, the handle is drawn on top of the barrel, even though parts of it should be hidden behind the barrel.
How to fix:
Enable the depth buffer as the other answer suggests, and/or order the meshes in your model by distance from the camera and draw them back to front (farthest first). This may not seem useful now, but when you start working with transparent objects you will see why this is a good thing to learn.
Enable depth buffer:
GraphicsDevice.DepthStencilState = DepthStencilState.Default;
And to draw a model in XNA you can follow this tutorial, which shows how the bone transforms have to be applied to get the meshes in the right position.
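In outline (a sketch of the usual XNA pattern rather than your exact project code), applying the bone transforms looks like this:

private void DrawModel(Model model, Matrix world, Matrix view, Matrix projection)
{
    // Bake each mesh's absolute bone transform into its world matrix,
    // so the parts end up where Blender placed them.
    Matrix[] transforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(transforms);

    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();
            effect.World = transforms[mesh.ParentBone.Index] * world;
            effect.View = view;
            effect.Projection = projection;
        }
        mesh.Draw();
    }
}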
Using OpenTK, I've created a window (800x600) with a vertical FOV of 90°.
I want to make a 2D game with a background image that fits on the whole screen.
What I want is the visible area of the plane at a variable z coordinate, as a RectangleF.
Currently my code is:
var y = (float)(Math.Tan(Math.PI / 4) * z);
return new RectangleF(aspectRatio * -y, -y, 2 * aspectRatio * y, 2 * y);
The rectangle calculated by this is always a little too small; the effect seems to decrease as z increases.
Hoping someone can spot my mistake.
I want to make a 2D game with a background image that fits on the whole screen.
Then don't bother with perspective calculations. Just switch to an orthographic projection for drawing the background, disabling depth writes. Then switch to a perspective projection for the rest.
OpenGL is not a scene graph; it's a stateful drawing API. Make use of that fact.
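A rough sketch of that flow with OpenTK (DrawBackground, DrawScene, and projectionUniformLocation are hypothetical placeholders for your own quad drawing, scene drawing, and shader uniform handle):

// 1) Background: orthographic projection, depth writes off.
GL.DepthMask(false);
Matrix4 ortho = Matrix4.CreateOrthographicOffCenter(0f, 800f, 0f, 600f, -1f, 1f);
GL.UniformMatrix4(projectionUniformLocation, false, ref ortho);
DrawBackground(); // full-screen textured quad in pixel coordinates

// 2) Scene: perspective projection, depth writes back on.
GL.DepthMask(true);
Matrix4 persp = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver2, 800f / 600f, 0.1f, 100f);
GL.UniformMatrix4(projectionUniformLocation, false, ref persp);
DrawScene(); // the rest of the world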
To make a 2D game using OpenGL, you should use an orthographic projection, like this tutorial shows.
Then it's simple to fill the screen with whatever image you want, because you aren't dealing with perspective.
However, if you were to insist on doing things the way you describe, you'd have to gluUnProject the 4 corners of your screen using the current modelview matrix and then draw a quad in 3D space at those corners. Even with this method, the quad might sometimes not cover the entire screen due to floating point errors.