My if statement doesn't seem to be working: from what I can gather with the debug message boxes I placed throughout the code to report variables, it simply isn't modifying the variable "pos" in the if block, even though the if block is definitely being executed. It's hard to explain.
I'm building a little game with cars on a street. Here, I try to spawn new cars and assign them a starting position (or modify their position) based on the lane of the street they're in. This isn't production code; I'm just roughing out the basic idea.
for (int i = 0; i < carlane.Count; i++)
{
    float lane = carlane.ElementAt(i);
    if (lane == 1)
    {
        if (carpos.Count <= i)
        {
            pos = new Vector2(screenWidth - 20, (screenHeight / 2) - (8 * screenHeight / 200));
        }
        else
        {
            pos = new Vector2(carpos[i].X - 2, carpos[i].Y);
        }
        rotation = 1.5f * (float)Math.PI;
    }
    else if (lane == 2)
    {
        if (carpos.Count <= i)
        {
            pos = new Vector2(screenWidth - 20, (screenHeight / 2) - (8 * screenHeight / 200));
        }
        else
        {
            pos = new Vector2(carpos[i].X - 2, carpos[i].Y);
        }
        rotation = 1.5f * (float)Math.PI;
    }
    spriteBatch.Draw(car, pos, null, Color.White, rotation, origin, (lane - 1) * (float)Math.PI * 0.5f, SpriteEffects.None, 0f);
    if (carpos.Count > i)
    {
        carpos[i] = pos;
    }
    else
    {
        carpos.Add(pos);
    }
}
And so, when lane is set to 1, nothing happens: cars spawn, but don't appear. When lane is set to 2, I purposely used the same code inside the if block as for lane 1, and the cars spawn and drive along the lane correctly. Something is wrong with the code when lane = 1, and I don't know what it is.
My computer runs Windows 7 Home Premium 64 bit, and I'm using C# 2010 express edition with XNA game studio 4.0.
Please help?
When lane equals 1, the scale (lane - 1) * (float)Math.PI * 0.5f equals 0, which means the car is scaled to nothing - thus nothing appears on the screen.
When lane is 1, (lane - 1) * (float)Math.PI * 0.5f is zero. You're calling Draw with a scale argument of zero, which draws nothing.
The documentation:
public void Draw (
    Texture2D texture,
    Vector2 position,
    Nullable<Rectangle> sourceRectangle,
    Color color,
    float rotation,
    Vector2 origin,
    float scale,
    SpriteEffects effects,
    float layerDepth
)
Your code:
spriteBatch.Draw(
    car,
    pos,
    null,
    Color.White,
    rotation,
    origin,
    (lane - 1) * (float)Math.PI * 0.5f,
    SpriteEffects.None,
    0f
);
Looks like the scale to me.
You should only change the pos variable depending on the lane, not the size of the sprite. You were changing the scale of the sprite; just set that to 1.0f:
spriteBatch.Draw(car, pos, null, Color.White, rotation,
origin, 1.0f, SpriteEffects.None, 0f);
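If the idea was to encode driving direction in the rotation rather than the scale, the per-lane branch could set the rotation explicitly while the scale stays fixed. A minimal sketch, reusing the question's variables (pos, origin, lane); the lane-2 angle here is a placeholder assumption, not taken from the original:

// Sketch: lane picks the rotation; the scale is always 1.0f.
float rotation = (lane == 1)
    ? 1.5f * (float)Math.PI   // lane 1: the angle used in the question
    : 0.5f * (float)Math.PI;  // lane 2: hypothetical opposite direction

spriteBatch.Draw(car, pos, null, Color.White, rotation,
    origin, 1.0f, SpriteEffects.None, 0f);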
Related
I'm in the process of setting up a relatively simple voxel-based world for a game. The high-level idea is to first generate voxel locations following a Fibonacci grid, then rotate the cubes so that the outer surface of the grid resembles a sphere, and finally size the cubes so that they roughly cover the surface of the sphere (overlap is fine). Below is the code for generating the voxels along the Fibonacci grid:
public static Voxel[] CreateInitialVoxels(int numberOfPoints, int radius)
{
    float goldenRatio = (1 + Mathf.Sqrt(5)) / 2;
    Voxel[] voxels = new Voxel[numberOfPoints];
    for (int i = 0; i < numberOfPoints; i++)
    {
        float n = i - numberOfPoints / 2; // Center at zero
        float theta = 2 * Mathf.PI * n / goldenRatio;
        float phi = (Mathf.PI / 2) + Mathf.Asin(2 * n / numberOfPoints);
        voxels[i] = new Voxel(new Location(theta, phi, radius));
    }
    return voxels;
}
This generates a sphere that looks roughly like a staircase.
So, my current approach to get this looking a bit more spherical is to basically rotate each cube in each pair of axes, then combine all of the rotations:
private void DrawVoxel(Voxel voxel, GameObject voxelContainer)
{
    GameObject voxelObject = Instantiate<GameObject>(GetVoxelPrefab());
    voxelObject.transform.position = voxel.location.cartesianCoordinates;
    voxelObject.transform.parent = voxelContainer.transform;
    Vector3 norm = voxel.location.cartesianCoordinates.normalized;
    float xyRotationDegree = Mathf.Atan(norm.y / norm.x) * (180 / Mathf.PI);
    float zxRotationDegree = Mathf.Atan(norm.z / norm.x) * (180 / Mathf.PI);
    float yzRotationDegree = Mathf.Atan(norm.z / norm.y) * (180 / Mathf.PI);
    Quaternion xyRotation = Quaternion.AngleAxis(xyRotationDegree, new Vector3(0, 0, 1));
    Quaternion zxRotation = Quaternion.AngleAxis(zxRotationDegree, new Vector3(0, 1, 0));
    Quaternion yzRotation = Quaternion.AngleAxis(yzRotationDegree, new Vector3(1, 0, 0));
    voxelObject.transform.rotation = zxRotation * yzRotation * xyRotation;
}
The primary thing I'm getting caught on is that each of these rotations seems to work fine in isolation, but when combining them, things go a bit haywire (pictures below). I'm not sure exactly what the issue is. My best guess is that I've made some sign/rotation mismatch in my rotations, so they don't combine correctly. I can get two working, but never all three together.
Above are the pictures of one and two successful rotations, followed by the error mode when I attempt to combine them. Any help would be appreciated, whether it's telling me that the approach I'm following is too convoluted or helping me understand the right way to combine these rotations. The Cartesian coordinate conversion is below for reference.
[System.Serializable]
public struct Location
{
    public float theta, phi, r;
    public Vector3 polarCoordinates;
    public float x, y, z;
    public Vector3 cartesianCoordinates;

    public Location(float theta, float phi, float r)
    {
        this.theta = theta;
        this.phi = phi;
        this.r = r;
        this.polarCoordinates = new Vector3(theta, phi, r);
        this.x = r * Mathf.Sin(phi) * Mathf.Cos(theta);
        this.y = r * Mathf.Sin(phi) * Mathf.Sin(theta);
        this.z = r * Mathf.Cos(phi);
        this.cartesianCoordinates = new Vector3(x, y, z);
    }
}
I managed to find a solution to this problem, though it's still not clear to me what the issue with the above code is.
Unity has an extremely handy function called Quaternion.FromToRotation that will generate the appropriate rotation if you simply pass in the starting direction and the destination vector.
In my case I was able to just do:
voxelObject.transform.rotation = Quaternion.FromToRotation(new Vector3(0, 0, 1), voxel.location.cartesianCoordinates);
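With that, the DrawVoxel method from the question could shrink to something like this (a sketch based on the snippet above, assuming the prefab's forward axis is +Z; not necessarily the exact final code):

private void DrawVoxel(Voxel voxel, GameObject voxelContainer)
{
    GameObject voxelObject = Instantiate<GameObject>(GetVoxelPrefab());
    voxelObject.transform.position = voxel.location.cartesianCoordinates;
    voxelObject.transform.parent = voxelContainer.transform;

    // Rotate the prefab's +Z axis to point along the outward direction
    // of this voxel, replacing the three manual per-axis rotations.
    voxelObject.transform.rotation = Quaternion.FromToRotation(
        new Vector3(0, 0, 1), voxel.location.cartesianCoordinates);
}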
I'm trying to generate a circular mesh made up of triangles with a common center at the center of the circle.
The mesh is generated properly, but the UVs are not and I am having some trouble understanding how to add them.
I assumed I would just copy the vertexes' pattern, but it didn't work out.
Here is the function:
private void _MakeMesh(int sides, float radius = 0.5f)
{
    m_LiquidMesh.Clear();
    float angleStep = 360.0f / (float) sides;
    List<Vector3> vertexes = new List<Vector3>();
    List<int> triangles = new List<int>();
    List<Vector2> uvs = new List<Vector2>();
    Quaternion rotation = Quaternion.Euler(0.0f, angleStep, 0.0f);
    // Make first triangle.
    vertexes.Add(new Vector3(0.0f, 0.0f, 0.0f));
    vertexes.Add(new Vector3(radius, 0.0f, 0.0f));
    vertexes.Add(rotation * vertexes[1]);
    // First UV ??
    uvs.Add(new Vector2(0, 0));
    uvs.Add(new Vector2(1, 0));
    uvs.Add(rotation * uvs[1]);
    // Add triangle indices.
    triangles.Add(0);
    triangles.Add(1);
    triangles.Add(2);
    for (int i = 0; i < sides - 1; i++)
    {
        triangles.Add(0);
        triangles.Add(vertexes.Count - 1);
        triangles.Add(vertexes.Count);
        // UV ??
        vertexes.Add(rotation * vertexes[vertexes.Count - 1]);
    }
    m_LiquidMesh.vertices = vertexes.ToArray();
    m_LiquidMesh.triangles = triangles.ToArray();
    m_LiquidMesh.uv = uvs.ToArray();
    m_LiquidMesh.RecalculateNormals();
    m_LiquidMesh.RecalculateBounds();
    Debug.Log("<color=yellow>Liquid mesh created</color>");
}
How does mapping UV work in a case like this?
Edit: I'm trying to use this circle as an effect of something flowing outwards from the center (think: liquid mesh for a brewing pot)
This is an old post, but maybe someone else will benefit from my solution.
So basically I gave my center point the center of the UV (0.5, 0.5) and then used the unit circle formula to give every other point its UV coordinate. Of course, I had to remap the cos and sin results from -1..1 to 0..1, and everything is working great.
Vector2[] uv = new Vector2[vertices.Length];
uv[uv.Length - 1] = new Vector2(0.5f, 0.5f);
for (int i = 0; i < uv.Length - 1; i++)
{
    float radians = (float) i / (uv.Length - 1) * 2 * Mathf.PI;
    uv[i] = new Vector2(Mathf.Cos(radians).Remap(-1f, 1f, 0f, 1f), Mathf.Sin(radians).Remap(-1f, 1f, 0f, 1f));
}
mesh.uv = uv;
Where Remap is an extension method like this; it basically takes a value in one range and remaps it to another range (in this case from -1..1 to 0..1):
public static float Remap(this float value, float from1, float to1, float from2, float to2)
{
    return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}
I'm making an ASCII game using OpenTK & WinForms, and I'm stuck on camera movement and the 2D view.
This is the code I'm using for the mouse event and for translating positions:
public static Point convertScreenToWorldCoords(int x, int y)
{
    int[] viewport = new int[4];
    Matrix4 modelViewMatrix, projectionMatrix;
    GL.GetFloat(GetPName.ModelviewMatrix, out modelViewMatrix);
    GL.GetFloat(GetPName.ProjectionMatrix, out projectionMatrix);
    GL.GetInteger(GetPName.Viewport, viewport);
    Vector2 mouse;
    mouse.X = x;
    mouse.Y = y;
    Vector4 vector = UnProject(ref projectionMatrix, modelViewMatrix, new Size(viewport[2], viewport[3]), mouse);
    Point coords = new Point((int)vector.X, (int)vector.Y);
    return coords;
}
public static Vector4 UnProject(ref Matrix4 projection, Matrix4 view, Size viewport, Vector2 mouse)
{
    Vector4 vec;
    vec.X = 2.0f * mouse.X / (float)viewport.Width - 1;
    vec.Y = 2.0f * mouse.Y / (float)viewport.Height - 1;
    vec.Z = 0;
    vec.W = 1.0f;
    Matrix4 viewInv = Matrix4.Invert(view);
    Matrix4 projInv = Matrix4.Invert(projection);
    Vector4.Transform(ref vec, ref projInv, out vec);
    Vector4.Transform(ref vec, ref viewInv, out vec);
    // Only divide when W is meaningfully non-zero.
    if (vec.W > float.Epsilon || vec.W < -float.Epsilon)
    {
        vec.X /= vec.W;
        vec.Y /= vec.W;
        vec.Z /= vec.W;
    }
    return vec;
}
//on mouse click event
Control control = sender as Control;
Point worldCoords = convertScreenToWorldCoords(e.X, control.ClientRectangle.Height - e.Y);
playerX = (int)Math.Floor((double)worldCoords.X / 9d);
playerY = (int)Math.Floor((double)worldCoords.Y / 9d);
And this code sets up my projection, but something is wrong here...
//Set Projection
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(0, Width * charWidth * scale, Height * charHeight * scale, 0, -1, 1);
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
GL.Translate(playerX, playerY, 0);
My problem is the GL.Translate call. As written, the view is not centered on playerX/playerY, and the camera movement seems reversed. If I use GL.Translate(-playerX, -playerY, 0), the movement direction seems correct, but the view still isn't centered on the player object (the player should always be at the center of the view, a typical top-down camera). I don't know how to set this up correctly; my experiments multiplying, dividing, etc. with the X,Y position don't give me the correct view. How should it be done in this case?
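For reference, the usual top-down recipe is to translate by half the view size minus the player's position in pixels, so the player lands mid-screen. A rough sketch, assuming 9-pixel cells (the divisor used in the click handler above); viewWidth/viewHeight stand for the extents passed to GL.Ortho and are placeholder names, not from the original code:

// Center the camera on the player (top-down view).
float cell = 9f;                                 // cell size, from the click handler
float camX = (viewWidth / 2f) - (playerX * cell);
float camY = (viewHeight / 2f) - (playerY * cell);

GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
GL.Translate(camX, camY, 0); // shift the world so the player is mid-screen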
When I create particle effects, they all come out with the same pattern. They are rotated differently, but they all have the same pattern and the same colored particles. See picture:
This is how a new ParticleEffect gets created:
ParticleEffect p = new ParticleEffect(textures, Vector2.Zero, destination, speed);
Where textures is a Texture2D list, Vector2.Zero is the starting location, and so on.
Whenever a new ParticleEffect gets created it gets added to the ParticleList list, which later loops through all the items and calls update and draw for each effect inside.
Here is where the particles are randomised:
private Particle GenerateNewParticle()
{
    Random random = new Random();
    Texture2D texture = textures[random.Next(textures.Count)];
    Vector2 position = EmitterLocation;
    Vector2 velocity = new Vector2(
        1f * (float)(random.NextDouble() * 2 - 1),
        1f * (float)(random.NextDouble() * 2 - 1));
    float angle = 0;
    float angularVelocity = 0.1f * (float)(random.NextDouble() * 2 - 1);
    Color color = new Color(
        (float)random.NextDouble(),
        (float)random.NextDouble(),
        (float)random.NextDouble());
    float size = (float)random.NextDouble();
    int ttl = 20 + random.Next(40);
    return new Particle(texture, position, velocity, angle, angularVelocity, color, size, ttl);
}
A bunch of randoms there, but each effect still comes out the same.
Comment if you want to see more code.
Edit:
Here is how a particle gets drawn:
public void Draw(SpriteBatch spriteBatch)
{
    Rectangle sourceRectangle = new Rectangle(0, 0, Texture.Width, Texture.Height);
    Vector2 origin = new Vector2(Texture.Width / 2, Texture.Height / 2);
    spriteBatch.Draw(Texture, Position, sourceRectangle, Color,
        Angle, origin, Size, SpriteEffects.None, 0f);
}
By default, Random instances are initialized with the current time as the seed, which means the same sequence of numbers will be regenerated if you create instances at (nearly) the same time. Don't create new instances for each particle; reuse an existing instance to get more "randomized" behavior (in your case, e.g. use a static Random instance).
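A minimal sketch of that fix applied to the GenerateNewParticle method above; only the shared static field is new, the rest of the body is unchanged:

// One generator, seeded once, shared by every call (and every effect).
private static readonly Random random = new Random();

private Particle GenerateNewParticle()
{
    // No "new Random()" here anymore; all calls use the static instance.
    Texture2D texture = textures[random.Next(textures.Count)];
    Vector2 position = EmitterLocation;
    Vector2 velocity = new Vector2(
        1f * (float)(random.NextDouble() * 2 - 1),
        1f * (float)(random.NextDouble() * 2 - 1));
    float angle = 0;
    float angularVelocity = 0.1f * (float)(random.NextDouble() * 2 - 1);
    Color color = new Color(
        (float)random.NextDouble(),
        (float)random.NextDouble(),
        (float)random.NextDouble());
    float size = (float)random.NextDouble();
    int ttl = 20 + random.Next(40);
    return new Particle(texture, position, velocity, angle, angularVelocity, color, size, ttl);
}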
I've been trying to combine these two samples from David Amador:
http://www.david-amador.com/2010/03/xna-2d-independent-resolution-rendering/
http://www.david-amador.com/2009/10/xna-camera-2d-with-zoom-and-rotation/
Everything is working fine except I'm having some difficulty getting the mouse coordinates. I was able to get them for each individual sample, but my math for taking both into account seems to be wrong.
The mouse coordinates ARE correct if my virtual resolution and actual resolution are the same. It's when I do something like Resolution.SetVirtualResolution(1920, 1080) and Resolution.SetResolution(1280, 720, false) that the coordinates slowly get out of sync as I move the camera.
Here is the code:
public static Vector2 MousePositionCamera(Camera camera)
{
    float MouseWorldX = (Mouse.GetState().X - Resolution.VirtualWidth * 0.5f + (camera.position.X) * (float)Math.Pow(camera.Zoom, 1)) /
        (float)Math.Pow(camera.Zoom, 1);
    float MouseWorldY = ((Mouse.GetState().Y - Resolution.VirtualHeight * 0.5f + (camera.position.Y) * (float)Math.Pow(camera.Zoom, 1))) /
        (float)Math.Pow(camera.Zoom, 1);
    Vector2 mousePosition = new Vector2(MouseWorldX, MouseWorldY);
    Vector2 virtualViewport = new Vector2(Resolution.VirtualViewportX, Resolution.VirtualViewportY);
    mousePosition = Vector2.Transform(mousePosition - virtualViewport, Matrix.Invert(Resolution.getTransformationMatrix()));
    return mousePosition;
}
In the Resolution class I added this:
virtualViewportX = (_Device.PreferredBackBufferWidth / 2) - (width / 2);
virtualViewportY = (_Device.PreferredBackBufferHeight / 2) - (height / 2);
Everything else is the same as the sample code. Thanks in advance!
Thanks to David Gouveia I was able to identify the problem... My camera matrix was using the wrong math.
I'm going to post all of my code with the hopes of helping someone who is trying to do something similar.
Camera transformation matrix:
public Matrix GetTransformMatrix()
{
    transform = Matrix.CreateTranslation(new Vector3(-position.X, -position.Y, 0)) *
        Matrix.CreateRotationZ(rotation) *
        Matrix.CreateScale(new Vector3(Zoom, Zoom, 1)) *
        Matrix.CreateTranslation(new Vector3(Resolution.VirtualWidth * 0.5f, Resolution.VirtualHeight * 0.5f, 0));
    return transform;
}
That will also center the camera. Here's how you get the mouse coordinates, combining both the Resolution class and the camera class:
public static Vector2 MousePositionCamera(Camera camera)
{
    Vector2 mousePosition;
    mousePosition.X = Mouse.GetState().X;
    mousePosition.Y = Mouse.GetState().Y;
    // Adjust for resolutions like 800 x 600 that are letterboxed on the Y:
    mousePosition.Y -= Resolution.VirtualViewportY;
    Vector2 screenPosition = Vector2.Transform(mousePosition, Matrix.Invert(Resolution.getTransformationMatrix()));
    Vector2 worldPosition = Vector2.Transform(screenPosition, Matrix.Invert(camera.GetTransformMatrix()));
    return worldPosition;
}
Combined with all of the other code I posted/mentioned this should be all you need to achieve resolution independence and an awesome camera!