I'm making a small game in XNA. I have a camera up in the air 20 units on the y axis. Below it I have a grid of tiles that are 100x100. Right now what I'm trying to do is have a 3D object move with the mouse along the X and Z axes of the grid. I'm using Viewport.Unproject to convert the 2D screen coordinates to 3D ones, but whatever I try it doesn't seem to be quite right. At the moment I have this:
Vector3 V1 = graphicsDevice.Viewport.Unproject(new Vector3(mouse.X, mouse.Y, 0f), camera.Projection, camera.View, camera.World);
If I use this then it moves, but only by a tiny amount. I've tried replacing the Z value with a 1, but then it moves a drastic amount (I understand why, I'm just not really sure how to fix it).
I've tried various other methods, such as having two vectors, one with a 0 Z and one with a 1 Z, then subtracting and normalizing them, but that wasn't it either.
The closest I got was multiplying the result by the zoom amount, but it wasn't perfect: it was slightly offset and would go crazy whenever I scrolled the screen, so I figured that was the wrong approach too.
Any help would be greatly appreciated, thanks.
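For what it's worth, the two-point idea described above is usually finished by unprojecting the mouse at depths 0 and 1, building a ray from the two results, and intersecting that ray with the plane the tiles sit on. A minimal sketch, assuming the tiles lie on the Y = 0 plane and reusing the matrices from the call above:

Vector3 nearPoint = graphicsDevice.Viewport.Unproject(
    new Vector3(mouse.X, mouse.Y, 0f),
    camera.Projection, camera.View, camera.World);
Vector3 farPoint = graphicsDevice.Viewport.Unproject(
    new Vector3(mouse.X, mouse.Y, 1f),
    camera.Projection, camera.View, camera.World);

// Build a ray through both points and intersect it with the ground plane (Y = 0).
Ray ray = new Ray(nearPoint, Vector3.Normalize(farPoint - nearPoint));
Plane ground = new Plane(Vector3.Up, 0f);

float? distance = ray.Intersects(ground);
if (distance.HasValue)
{
    // The point on the tile grid under the cursor; move the object here.
    Vector3 worldPosition = ray.Position + ray.Direction * distance.Value;
}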
Related
I'm working on an RTS game with some pretty extensive UI, so I moved the main camera's output to a quad which only makes up about half the screen, and I'm blitting some UI effects over the rest. My current way of interacting with the game uses Unity's Input.mousePosition. When I moved the camera's feed to the quad, obviously those pixel coordinates were distorted, so I fixed them like this:
mapMousePos = (Input.mousePosition * mscaleCorr - mapCorrection * mscaleCorr);
mapCorrection being the pixel offset of the smaller feed, and mscaleCorr being a magic number I got through trial and error, as a temporary fix.
Point is, now I'm realizing that running this game at a different resolution will almost certainly break these magic numbers.
What I want mapMousePos to be is what Input.mousePosition was before I moved the gameplay to the small quad: going from (0,0) at the bottom left of the quad to the screen's (width, height) at the top right of the quad. This is just so it works nicely with ScreenToWorldPoint on my gameplay camera.
I have the camera-feed quad parented to a full-screen quad, and tried using their relative positions to apply the necessary transformations, but it didn't work, I'm guessing because it's a pixel problem.
I've dug around the docs for a solution using Camera's built-in WorldToScreenPoint function, without any luck. I'm sure I'll bump into a fix eventually, but would greatly appreciate any pointers.
Here's what I've come up with; it's stupid, but it works.
I've placed objects at the bottom left and top right of the quad, stored in code as bL and tR.
Then I convert the mousePosition to a worldPosition using ScreenToWorldPoint(), remap it by subtracting the bottom left position, and get it as a percentage across the screen by dividing it by the delta to the top right. Multiply the percentage by the pixel dimensions of the gameplay camera, and voila.
In code, this:
Vector3 wPos = finalcam.ScreenToWorldPoint(Input.mousePosition);
wPos -= bL.transform.position;
mapMousePos = new Vector2(Mathf.Abs(wPos.x), Mathf.Abs(wPos.y));
mapMousePos = new Vector2(
    mapMousePos.x / (tR.transform.position.x - bL.transform.position.x),
    mapMousePos.y / (tR.transform.position.y - bL.transform.position.y));
mapMousePos = new Vector2(mapMousePos.x * Camera.main.pixelWidth, mapMousePos.y * Camera.main.pixelHeight);
Again, it's dumb, but it seems to work. I'm leaving this up in case anybody knows a cleaner method.
I have a Vector3 of how many blocks in a grid a piece is along each axis. For example, if one of these vectors was (1, 2, 1) it would be 1 block long on the x-axis, 2 blocks long on the y-axis, and one block long on the z-axis. I also have a Vector3 of angles that denote rotations around each axis. For example, if one of these vectors was (90, 180, 0) the piece would be rotated by 90 degrees around the x-axis, 180 degrees around the y-axis, and 0 degrees around the z-axis. What I can't figure out is how to rotate the dimensions of a piece by its vector of rotation angles so I know what points in space it's occupying.
public class Block
{
    private Vector3 localOrientation;
    private Vector3 dimensions;

    public Vector3 GetRotatedDimensions()
    {
        //your implementation here
    }
}
If I understand correctly, there is something fundamentally wrong with your question. There can be no "rotated dimensions". Let's use a rectangle to demonstrate this. (I didn't understand correctly)
Suppose there's this initial rectangle:
and you rotate it. This is what you get:
Using a single Vector2, you can't differentiate a "rotated x*y rectangle" from an "initial (x')*(y') rectangle". To sufficiently describe the space a rectangle occupies, you need to keep the size AND the rotation in your block-describing variable.
Are x' and y' what you wanted to know? I doubt it. Oh, you do? Great!
In 3 dimensions, I would define what you're looking for as
The minimum dimensions of a rectangular box that
1. has its faces parallel to the XY, XZ and YZ planes and
2. contains another rectangular box of known dimensions and orientation.
There are possibly more elegant solutions, but I'd brute force it like this:
1. Make 8 Vector3 objects, one for each vertex of your block.
2. Rotate all of them around the x-axis.
3. Rotate them (the new ones you got from step 2) around the y-axis.
4. Rotate them (the new ones you got from step 3) around the z-axis.
5. Find the min and max values of the x, y and z coordinates among all your points.
6. Your new dimensions are (x_max - x_min), (y_max - y_min), (z_max - z_min).
I'm not 100% sure about this though, so make sure you verify the results!
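A rough sketch of those steps in code (assuming XNA-style Vector3/Matrix math, that localOrientation stores degrees, and a rotation order of X, then Y, then Z; verify against your own conventions):

public Vector3 GetRotatedDimensions()
{
    // Half-extents of the unrotated box, centered on the origin.
    Vector3 half = dimensions * 0.5f;

    // Combined rotation; localOrientation is assumed to hold degrees.
    Matrix rotation = Matrix.CreateRotationX(MathHelper.ToRadians(localOrientation.X))
                    * Matrix.CreateRotationY(MathHelper.ToRadians(localOrientation.Y))
                    * Matrix.CreateRotationZ(MathHelper.ToRadians(localOrientation.Z));

    Vector3 min = new Vector3(float.MaxValue);
    Vector3 max = new Vector3(float.MinValue);

    // Rotate all 8 corners and track the axis-aligned extremes.
    for (int x = -1; x <= 1; x += 2)
    for (int y = -1; y <= 1; y += 2)
    for (int z = -1; z <= 1; z += 2)
    {
        Vector3 corner = Vector3.Transform(
            new Vector3(half.X * x, half.Y * y, half.Z * z), rotation);
        min = Vector3.Min(min, corner);
        max = Vector3.Max(max, corner);
    }

    // Dimensions of the axis-aligned box that contains the rotated block.
    return max - min;
}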
I am making a 3D game where I have to do some terrain generation to have an infinite and random level.
The terrain has to be undulating, with ups and downs, like this:
https://imgur.com/a/prTuRl0
I used Perlin noise to generate the terrain, with multiple planes in a queue, updating them as the player goes forward (I have the Z position of the last plane; when I update, I peek the first plane in line from the queue, dequeue it, and enqueue it again with new vertices generated through Mathf.PerlinNoise(0, zPosition)).
The reason I have a 0 in the first parameter is so that the planes are uniform along the x axis.
I'd like the gameplay to be similar to dunes but in 3D: the player controls a ball, clicking while grounded gives it speed, clicking while in the air brings it down, and the player can earn score and streaks by smoothly riding downhills, or die by crashing into an uphill.
problem:
https://imgur.com/a/sdZp1qQ
Perlin noise isn't always up and down the way I need it to be; sometimes it has those curves in the middle that make the ball jump and move in weird, undesired ways... I'd appreciate any help on this subject.
I fixed this problem by using the normal distribution (Gaussian) function to generate the waves; by messing with the parameters, I can add width or height to my wave.
I use 5 planes per "wave" (each wave is a GameObject array of 5 planes), and simply go through the vertices along the x axis, changing all their heights to the same value calculated from the function, then incrementing x and doing it again.
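A minimal sketch of that wave idea (plane, zCenter, amplitude and sigma are assumed names: the plane being recycled, where the crest sits along z, the hill height and the hill width):

void ApplyWave(GameObject plane, float zCenter, float amplitude, float sigma)
{
    Mesh mesh = plane.GetComponent<MeshFilter>().mesh;
    Vector3[] verts = mesh.vertices;

    for (int i = 0; i < verts.Length; i++)
    {
        // Gaussian bump: the height depends only on z, so the terrain stays
        // uniform across the x axis (assumes the plane is unrotated).
        float d = plane.transform.TransformPoint(verts[i]).z - zCenter;
        verts[i].y = amplitude * Mathf.Exp(-(d * d) / (2f * sigma * sigma));
    }

    mesh.vertices = verts;
    mesh.RecalculateNormals();
}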
I have some code that I wrote that works, but I feel it could be better and wanted to get some feedback.
The goal I had is to have a Sprite Scale up and back down in a timely fashion when a button is pushed so that it gives the illusion of jumping in a "Top Down" view of the game. Like the character is jumping off the screen. I already know how to draw scaled images I'm more interested in the logic of the timing aspect.
This works, I'm just not sure it's the best. I thought maybe there was some equation for it; a math friend suggested a linear equation, or maybe a parabola or second-order equation. I'm not great with math.
Anyway.
Class Properties
private double _jumpingStartedAt;
private double _totalJumpTimeInSeconds = 0.7;
private double _totalJumpFrames = 14;
private double _timeSinceLastScale;
private double _jumpingHalfWayAt;
When the button is pushed for the first time I start the "jump logic". This runs once per jump. My thought was that I'd mark the "start" time and determine the "halfway" time from _totalJumpTimeInSeconds.
_jumpingStartedAt = gameTime.TotalGameTime.TotalSeconds;
_jumpingHalfWayAt = _jumpingStartedAt + SecondsBetweenFrame() * (_totalJumpFrames / 2);
And then this is run on each Update() until my "jump" is complete, i.e. _isJumping = false. The logic here is that I scale up every "frame" until the halfway point, then scale back down.
_timeSinceLastScale += gameTime.ElapsedGameTime.TotalSeconds;

if (_timeSinceLastScale > SecondsBetweenFrame() && gameTime.TotalGameTime.TotalSeconds < _jumpingHalfWayAt)
{
    Scale += 0.2f;
    _timeSinceLastScale = 0;
}
else if (gameTime.TotalGameTime.TotalSeconds > _jumpingHalfWayAt)
{
    Scale -= 0.2f;
    if (Scale < 1.0) Scale = 1; // probably don't need this; was worried it might go past 1 toward 0
    if (Scale == 1.0) _isJumping = false;
}
private double SecondsBetweenFrame()
{
return _totalJumpTimeInSeconds / this._totalJumpFrames;
}
Now this works, but seems a little convoluted to me.
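For what it's worth, the parabola idea mentioned above would collapse the whole thing into a single formula: measure how far through the jump you are (0 to 1) and evaluate a curve that is 0 at take-off and landing and peaks halfway. A sketch reusing the fields above (maxExtraScale is an assumed new field for how much the sprite grows at the top of the jump):

double t = (gameTime.TotalGameTime.TotalSeconds - _jumpingStartedAt) / _totalJumpTimeInSeconds;
if (t >= 1.0)
{
    Scale = 1f;
    _isJumping = false;
}
else
{
    // 4t(1 - t) is 0 at t = 0 and t = 1, and 1 at t = 0.5.
    Scale = 1f + maxExtraScale * (float)(4.0 * t * (1.0 - t));
}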
Stretching image when jumping - side view
Yeah, what you created is pretty complicated.
I assume your sprite is also moving up and down when jumping, i.e. that you have some sort of Vector2 velocity, which you change by dv = gravityAcceleration * dt in every update, and a Vector2 position which you change by dp = velocity * dt. If so, I would rather use the velocity.Y value to calculate how the sprite should stretch. I think it's more natural, and your code will become much simpler.
Here's an image to describe better what I mean:
However, you can probably face the other problem here: just at the beginning of the jump your sprite will suddenly get high velocity, when still being near the ground, which can cause it to cross through the floor for a moment. To prevent that you can artificially move your sprite upwards by the smallest needed value for the time of jump. The problem is described by the image below:
As you can very well see, the first stretched ball moved upwards a little bit, but not enough. You have to calculate difference between sizes before and after stretching and then move your sprite up by that distance.
If you do it like that, your Update should shorten to just a few lines. I believe you can do simple calculations on your own.
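A rough sketch of that first idea (velocity is the Vector2 from the assumption above; stretchFactor and the sideways squash are my own additions to keep the area roughly constant):

// Stretch along Y grows with vertical speed; squash X by the inverse
// so the sprite keeps roughly the same area while deforming.
float stretchY = 1f + stretchFactor * Math.Abs(velocity.Y);
Vector2 spriteScale = new Vector2(1f / stretchY, stretchY);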
Easier approach
...unless you'd rather have your sprite behave just as you described. Then you could modify the scale according to your Y position:
if (KeyboardState.IsKeyDown(Keys.Space))
{
    isJumping = true;
    jumpStartPosition = Position;
}

if (!isJumping) Scale = 1f;
else
{
    // Grow with the distance travelled since the start of the jump
    // (the 1f keeps the scale at 1 rather than 0 at take-off).
    Scale = 1f + StretchFactor * Math.Abs(Position.Y - jumpStartPosition.Y);
}
where:
- isJumping is a bool,
- jumpStartPosition is a Vector2,
- Position is a Vector2 property of your sprite,
- StretchFactor is a float property of your sprite telling how much it stretches.
And you also need an end-of-jump condition, for example when the sprite's Position.Y becomes smaller than jumpStartPosition.Y. But generally this solution (as well as yours) has one disadvantage: there will be problems if you want to start the jump at one height and end it at another:
so I would rather recommend my first solution. There you can make the stop-jump condition a collision check.
Stretching image when jumping - top-down view
Bummer. Since it wasn't originally specified that this is a top-down game, like the first GTAs, I really misunderstood the question, so the answer above doesn't fit very well. The new answer follows.
If you want it to be realistic, you should use some basic principles of perspective. As we look from the top at the character jumping, it comes closer to us, so its image grows. Why's that? Look at the pic below.
There are two things needed for perspective to work: the center of perspective and the screen. The center of perspective is the point where all "rays" cross. A "ray" is a line from any point in the world to the center of our eye. The screen is the plane where the image of the 3D world is formed. Points in the real world are cast onto the screen along their rays. Of course your game is pseudo-3D, but that shouldn't matter in this case.
When z grows, the sprite comes closer to the center of perspective. If you imagine a ray from the center of perspective to the edge of the sprite, the angle of that ray changes as the distance to the center of perspective becomes smaller. And the change of angle makes the point's image move on the screen. That's why the image grows or shrinks.
Now we can wonder: OK, how do we put this into numbers? Look at the picture below:
I deliberately translated the whole world by -C so that the z coord of the center of perspective becomes 0. That makes the calculations simpler. What we are trying to find is x', the coord of the point on the screen. Let Z* = |z - C|. If we look at this picture it becomes clear that we can find what we need by a pretty simple proportion:
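(The proportion from the missing picture, reconstructed from the similar triangles, with L being the distance from the center of perspective to the screen: x' / L = x / Z*, so x' = x * L / Z*.)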
Using the same method you can calculate y'. If your character is always at the center of the screen, all you need is x'/x = y'/y = S, i.e. your scale. That's because x in this scenario is, in fact, the half-width of the sprite, and y is the half-height. However, if your character can move freely around the screen, you may want to both scale and translate it so it looks more natural:
The white square is the on-the-ground sprite, the gray square is the jumping sprite. In this case you will have to know l (left), r (right), t (top) and b (bottom) coords of the sprite's boundaries (top-bottom means Y-axis, not Z-axis). Then using the same proportion you can get l', r', t' and b' - boundaries of the sprite's image on screen. From this data you should be able to calculate both scale and translation.
Note: L is a parameter of our calculation which you have to choose yourself. Assuming the screen has constant width Ws and height Hs, L corresponds directly to the FOV (field of view); you can get it from the same kind of proportion: tan(FOV/2) = (Ws/2) / L, so L = (Ws/2) / tan(FOV/2). I would recommend FOV = 60 deg. If you make the FOV too big, you may face the fisheye problem.
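As a tiny sketch of how the scale for the jump falls out of that proportion: on the ground the scale is L / Z*_ground, in the air it is L / (Z*_ground - jumpHeight), and dividing the two makes L cancel (groundDistance and jumpHeight are assumed names):

// Relative scale of the sprite, normalized to 1 while on the ground.
// groundDistance is Z* for the ground plane; jumpHeight is how far the
// character has risen toward the camera, in the same units.
float JumpScale(float groundDistance, float jumpHeight)
{
    return groundDistance / (groundDistance - jumpHeight);
}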
Using OpenTK, I've created a window (800x600) with a vertical FOV of 90°.
I want to make a 2D game with a background image that fits on the whole screen.
What I want is the visible area of a plane at a variable z coordinate, as a RectangleF.
Currently my code is:
var y = (float)(Math.Tan(Math.PI / 4) * z); // half the visible height at distance z (half of the 90° vertical FOV is 45°)
return new RectangleF(aspectRatio * -y, -y, 2 * aspectRatio * y, 2 * y);
The rectangle calculated by this is always a little too small; the effect seems to decrease as z increases.
Hoping someone will find my mistake.
I want to make a 2D game with a background image that fits on the whole screen.
Then don't bother with perspective calculations. Just switch to an orthographic projection for drawing the background, disabling depth writes. Then switch to a perspective projection for the rest.
OpenGL is not a scene graph, it's a stateful drawing API. Make use of that fact.
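A minimal sketch of that state switch with OpenTK's fixed-function bindings (DrawBackgroundQuad and DrawScene are placeholders for your own drawing code; 800x600 is the window size from the question):

GL.MatrixMode(MatrixMode.Projection);
GL.PushMatrix();
GL.LoadIdentity();
GL.Ortho(0, 800, 0, 600, -1, 1);      // plain screen-space coordinates
GL.MatrixMode(MatrixMode.Modelview);
GL.PushMatrix();
GL.LoadIdentity();

GL.DepthMask(false);                  // don't write depth for the background
DrawBackgroundQuad();                 // placeholder: draw the full-screen image
GL.DepthMask(true);

GL.MatrixMode(MatrixMode.Modelview);
GL.PopMatrix();
GL.MatrixMode(MatrixMode.Projection);
GL.PopMatrix();                       // back to the perspective projection

DrawScene();                          // placeholder: the rest of the 3D scene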
To make a 2D game using OpenGL, you should use an orthographic projection, like this tutorial shows.
Then it's simple to fill the screen with whatever image you want, because you aren't dealing with perspective.
However, if you insist on doing things the way you describe, you'd have to gluUnProject the 4 corners of your screen using the current modelview matrix and then draw a quad in 3D space with those corners. Even with this method, the quad might sometimes not cover the entire screen due to floating-point errors.