Strange rendering using SlimDX (holes in faces) - c#

I have a problem rendering a simple rotating cube with SlimDX (DirectX 11). The cube is rotating, and these are the images I get:
I use this vertex shader:
float4x4 WorldViewProj : register(c0);
float4 VShader(float4 position : POSITION) : SV_POSITION
{
    return mul(float4(position.xyz, 1.0), WorldViewProj);
}
And this is how I prepare WorldViewProj:
ProjectionMatrix = Matrix.PerspectiveFovLH((float)(Math.PI / 4.0), 1f, -1, 20);
ViewMatrix = Matrix.LookAtLH(new Vector3(0f, 0, ypos), new Vector3(0f, 0, 10), new Vector3(0f, 1f, 0f));
Matrix positionMatrix = Matrix.Translation(position);
Matrix rotationMatrix = Matrix.RotationYawPitchRoll(rotation.X, rotation.Y, rotation.Z);
Matrix scaleMatrix = Matrix.Scaling(scale);
Matrix worldMatrix = rotationMatrix * scaleMatrix * positionMatrix;
worldMatrix = Matrix.Transpose(worldMatrix);
Matrix WorldViewProj = ViewMatrix * ProjectionMatrix * worldMatrix;
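One identity worth keeping in mind when juggling conventions: transposing a product reverses the order of its factors, so transposing only one factor of WorldViewProj cannot match either convention. A minimal pure-Python sketch of that identity (the helper names are mine; nothing here is SlimDX API):

```python
# Demonstrates why transposing only one factor of WorldViewProj mixes
# conventions: transpose(P @ V @ W) equals W^T @ V^T @ P^T, so either
# transpose every factor (and reverse the order) or transpose none of them.
import random

def matmul(a, b):
    """4x4 matrix product for matrices stored as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def transpose(m):
    return [[m[j][i] for j in range(4)] for i in range(4)]

def random_matrix():
    return [[random.random() for _ in range(4)] for _ in range(4)]

random.seed(1)
P, V, W = random_matrix(), random_matrix(), random_matrix()

lhs = transpose(matmul(matmul(P, V), W))
rhs = matmul(matmul(transpose(W), transpose(V)), transpose(P))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(4) for j in range(4))
```

So if transposing only the world matrix makes something appear, the multiplication order and the transposes are compensating for each other rather than being consistent.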
First of all, I saw this question: Incorrect Clipping and 3D Projection when using SlimDX
And there is probably something wrong with the matrices (I don't transpose either the view or the projection matrix, but I do transpose the world matrix; in the opposite case nothing is rendered). But even if I set both the view and projection matrices to the identity matrix, the "hole" is still there.
Please help and thanks in advance!

The problem is found. It was just a wrong order of indices in the index buffer. Shame on me :) Thanks everybody for your help!
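For future readers, the symptom makes sense: with back-face culling enabled, triangles indexed in the wrong winding order are culled, which looks like holes in a closed mesh. A quick language-agnostic sanity check (a Python sketch; the helper and the cube-face vertices are mine, and which winding counts as front-facing depends on your rasterizer state):

```python
# For a convex solid centered at `center`, a consistently wound triangle has
# its geometric normal pointing away from the center. Triangles that fail this
# test will be culled (or will cull their neighbors) and appear as holes.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def sub(a, b):
    return (a[0]-b[0], a[1]-b[1], a[2]-b[2])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def faces_outward(v0, v1, v2, center=(0.0, 0.0, 0.0)):
    """True if triangle (v0, v1, v2) has its normal pointing away from center."""
    normal = cross(sub(v1, v0), sub(v2, v0))
    centroid = tuple((v0[i] + v1[i] + v2[i]) / 3 for i in range(3))
    return dot(normal, sub(centroid, center)) > 0

# One triangle of the +Z face of a cube centered at the origin:
v0, v1, v2 = (-1.0, -1.0, 1.0), (1.0, -1.0, 1.0), (1.0, 1.0, 1.0)
assert faces_outward(v0, v1, v2)        # this winding faces outward
assert not faces_outward(v2, v1, v0)    # the reversed winding faces inward
```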

Related

c# monogame 3d render billboard always facing camera with ability to rotation along X axis

I'm programming a 3D game in C# MonoGame, and I would like the particles to always face the camera. The code below works for that. I only send the particle's texture, size and rotation to HLSL, which allows me to calculate the corners on the GPU.
This is the code that works, without the particles rotating:
output.PositionWS is the vertex world position
CameraPosWS is the camera world position
size.x is the X size of the billboard
size.y is the Y size of the billboard
output.PositionWS = mul(float4(input.inPositionOS, 1), World).xyz;
float3 ParticleToCamera = output.PositionWS - CameraPosWS;
float3 ParticleUp = float3(0, 0, 1);
float3 ParticleRight = normalize(cross(ParticleToCamera, ParticleUp));
finalPosition.xyz += ((input.inTexCoords.x - 0.5f) * size.x) * ParticleRight;
finalPosition.xyz += ((0.5f - input.inTexCoords.y) * size.y) * ParticleUp;
But I would like the particle to now rotate while still always facing the camera. I was thinking of using a matrix to transform the calculated corners of the billboard to rotate the billboard, but I haven't been able to get it to work.
This is the code with billboard rotation along the X axis, which doesn't work:
rotation is a rotation in radians
float3 ParticleToCamera = output.PositionWS - CameraPosWS;
float3 ParticleUp = float3(0, 0, 1);
float3 ParticleRight = normalize(cross(ParticleToCamera, ParticleUp));
float3x3 RotationMatrix = float3x3(
    ParticleRight,
    ParticleUp,
    ParticleToCamera
);
float3 rotatedVertex = mul(float3(input.inPositionOS.xy, 0), RotationMatrix);
// Only apply rotation on the X axis
rotatedVertex.x = rotatedVertex.x * cos(rotation);
finalPosition.xyz += ((input.inTexCoords.x - 0.5f) * size.x) * rotatedVertex.x;
finalPosition.xyz += ((0.5f - input.inTexCoords.y) * size.y) * rotatedVertex.y;
You are overthinking this problem. The point of the vertex shader is to transform from an object/world coordinate system into a screen-based coordinate system.
The orientation is already screen aligned.
I will assume that scaling is accounted for in the size variable. Otherwise, scale it using ParticleToCamera.z.
Please note the following code is incomplete without the initial finalPosition definition: I have assumed it to be PositionWS.
output.PositionWS = mul(float4(input.inPositionOS, 1), World).xyz;
float3 ParticleToCamera = output.PositionWS - CameraPosWS;
float3 ParticleUp = float3(0, 0, 1);
float3 ParticleRight = float3(1, 0, 0);
finalPosition.xyz += ((input.inTexCoords.x - 0.5f) * size.x) * ParticleRight;
finalPosition.xyz += ((0.5f - input.inTexCoords.y) * size.y) * ParticleUp;
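If you do still want the in-plane spin the question asks about, the corner offsets can be rotated in the (ParticleRight, ParticleUp) plane before being applied. A minimal Python sketch of that 2D rotation (the function name and test values are mine; in HLSL it is the same two lines with cos/sin):

```python
import math

def rotated_corner_offset(u, v, size_x, size_y, angle):
    """Screen-space corner offset for texcoord (u, v), spun by `angle` radians.

    dx/dy are the unrotated offsets along ParticleRight/ParticleUp; rotating
    them within that 2D basis spins the quad while it keeps facing the camera.
    """
    dx = (u - 0.5) * size_x
    dy = (0.5 - v) * size_y
    return (dx * math.cos(angle) - dy * math.sin(angle),
            dx * math.sin(angle) + dy * math.cos(angle))

# Spinning by 90 degrees sends the right-edge midpoint offset onto the up axis:
off = rotated_corner_offset(1.0, 0.5, 2.0, 2.0, math.pi / 2)
assert abs(off[0]) < 1e-9 and abs(off[1] - 1.0) < 1e-9
```

The two returned components then scale ParticleRight and ParticleUp respectively, instead of multiplying the offsets by a rotation matrix built from world axes.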

Unity - Determining UVs for a circular plane mesh generated by code

I'm trying to generate a circular mesh made up of triangles with a common center at the center of the circle.
The mesh is generated properly, but the UVs are not and I am having some trouble understanding how to add them.
I assumed I would just copy the vertices' pattern, but it didn't work out.
Here is the function:
private void _MakeMesh(int sides, float radius = 0.5f)
{
    m_LiquidMesh.Clear();
    float angleStep = 360.0f / (float) sides;
    List<Vector3> vertexes = new List<Vector3>();
    List<int> triangles = new List<int>();
    List<Vector2> uvs = new List<Vector2>();
    Quaternion rotation = Quaternion.Euler(0.0f, angleStep, 0.0f);
    // Make first triangle.
    vertexes.Add(new Vector3(0.0f, 0.0f, 0.0f));
    vertexes.Add(new Vector3(radius, 0.0f, 0.0f));
    vertexes.Add(rotation * vertexes[1]);
    // First UV ??
    uvs.Add(new Vector2(0, 0));
    uvs.Add(new Vector2(1, 0));
    uvs.Add(rotation * uvs[1]);
    // Add triangle indices.
    triangles.Add(0);
    triangles.Add(1);
    triangles.Add(2);
    for (int i = 0; i < sides - 1; i++)
    {
        triangles.Add(0);
        triangles.Add(vertexes.Count - 1);
        triangles.Add(vertexes.Count);
        // UV ??
        vertexes.Add(rotation * vertexes[vertexes.Count - 1]);
    }
    m_LiquidMesh.vertices = vertexes.ToArray();
    m_LiquidMesh.triangles = triangles.ToArray();
    m_LiquidMesh.uv = uvs.ToArray();
    m_LiquidMesh.RecalculateNormals();
    m_LiquidMesh.RecalculateBounds();
    Debug.Log("<color=yellow>Liquid mesh created</color>");
}
How does mapping UV work in a case like this?
Edit: I'm trying to use this circle as an effect of something flowing outwards from the center (think: liquid mesh for a brewing pot)
This is an old post, but maybe someone else will benefit from my solution.
So basically I gave my center point the center of the UV square (0.5, 0.5) and then used the unit circle formula to give every other point its UV coordinate. But of course I had to remap the cos and sin results from -1..1 to 0..1, and everything is working great.
Vector2[] uv = new Vector2[vertices.Length];
uv[uv.Length - 1] = new Vector2(0.5f, 0.5f);
for (int i = 0; i < uv.Length - 1; i++)
{
    float radians = (float) i / (uv.Length - 1) * 2 * Mathf.PI;
    uv[i] = new Vector2(Mathf.Cos(radians).Remap(-1f, 1f, 0f, 1f), Mathf.Sin(radians).Remap(-1f, 1f, 0f, 1f));
}
mesh.uv = uv;
Where Remap is an extension method like the following; it basically takes a value in one range and remaps it to another range (in this case from -1..1 to 0..1):
public static float Remap(this float value, float from1, float to1, float from2, float to2) {
    return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}
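The same mapping can be sanity-checked outside Unity. A minimal Python sketch of the formula above (the function names are mine):

```python
import math

def remap(value, from1, to1, from2, to2):
    """Same formula as the C# Remap extension: linear range-to-range mapping."""
    return (value - from1) / (to1 - from1) * (to2 - from2) + from2

def circle_uv(i, n):
    """UV for the i-th of n rim vertices: unit circle remapped from -1..1 to 0..1."""
    radians = i / n * 2 * math.pi
    return (remap(math.cos(radians), -1.0, 1.0, 0.0, 1.0),
            remap(math.sin(radians), -1.0, 1.0, 0.0, 1.0))

# Angle 0 lands on the right edge of the UV square, vertically centered:
assert circle_uv(0, 8) == (1.0, 0.5)
# Every rim vertex stays inside the 0..1 UV square:
assert all(0.0 <= c <= 1.0 for i in range(8) for c in circle_uv(i, 8))
```

The center vertex gets (0.5, 0.5), so a radial texture (e.g. the flowing-liquid effect from the question) stretches evenly from the middle to the rim.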

XNA - how to draw an object farther

Here's how I draw some shape defined by vertices not shown here.
Vector3 position = new Vector3(5, 5, 1);
Matrix world = Matrix.CreateTranslation(position);
BasicEffect basicEffect = new BasicEffect(graphicsDevice);
Matrix view = Matrix.CreateLookAt(new Vector3(0, 0, -20), new Vector3(0, 0, 100), Vector3.Up);
Matrix projection = Matrix.CreatePerspectiveFieldOfView(MathHelper.PiOver4,
    graphics.Viewport.AspectRatio,
    1.0f,
    100);
// Set BasicEffect parameters.
basicEffect.World = world;
basicEffect.View = view;
basicEffect.Projection = projection;
//....draw some shape with basicEffect
I would like to paint the same shape only farther away so that its center stays in the same (x,y) pixel on screen but it is overall smaller as it's more distant.
I've tried scaling the position vector but had no success with it:
position.Z *= 2;
position.X *= 2;
position.Y *= 2;
What's the right way to do this?
Think about it geometrically: moving the object away from the camera means moving it along a line defined by two points: the camera's position and the object's position.
Now it's easy!
1) Find the camera-to-object vector, i.e.
Vector3 direction = objectPosition - cameraPosition;
2) Move the object along that vector by a certain amount, that is:
2.1) Normalize the direction
direction.Normalize();
2.2) Move the object by an amount x in that direction
objectPosition += direction * x;
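Why this keeps the center in the same pixel: the object moves along the camera-to-object ray, and every point on that ray projects to the same screen coordinate under a perspective projection. A minimal Python sketch (the toy projection and the numbers are mine, not XNA's):

```python
import math

def move_away(camera, obj, x):
    """Move obj x units farther from camera along the camera-to-object ray."""
    direction = tuple(o - c for o, c in zip(obj, camera))
    length = math.sqrt(sum(d * d for d in direction))
    direction = tuple(d / length for d in direction)
    return tuple(o + d * x for o, d in zip(obj, direction))

def project(camera, p):
    """Toy perspective projection: camera at `camera`, looking down +Z."""
    rel = tuple(a - b for a, b in zip(p, camera))
    return (rel[0] / rel[2], rel[1] / rel[2])

camera = (0.0, 0.0, -20.0)
obj = (5.0, 5.0, 1.0)
moved = move_away(camera, obj, 10.0)

# The projected center is unchanged, but the object is now farther from the camera:
assert all(abs(a - b) < 1e-9 for a, b in zip(project(camera, obj), project(camera, moved)))
assert moved[2] > obj[2]
```

Scaling the position componentwise (as in the question) moves the object along a ray through the world origin instead, which is why the center drifted on screen.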
And there you have it.

Applying modeling matrix to view matrix = failure

I've got a problem with moving and rotating objects in OpenGL. I'm using C# and OpenTK (Mono), but I guess the problem is with me not understanding the OpenGL part, so you might be able to help me even if you don't know anything about C# / OpenTK.
I'm reading the OpenGL SuperBible (latest edition) and I tried to rewrite the GLFrame in C#. Here is the part I've already rewritten:
public class GameObject
{
    protected Vector3 vLocation;
    public Vector3 vUp;
    protected Vector3 vForward;

    public GameObject(float x, float y, float z)
    {
        vLocation = new Vector3(x, y, z);
        vUp = Vector3.UnitY;
        vForward = Vector3.UnitZ;
    }

    public Matrix4 GetMatrix(bool rotationOnly = false)
    {
        Matrix4 matrix;
        Vector3 vXAxis;
        Vector3.Cross(ref vUp, ref vForward, out vXAxis);
        matrix = new Matrix4();
        matrix.Row0 = new Vector4(vXAxis.X, vUp.X, vForward.X, vLocation.X);
        matrix.Row1 = new Vector4(vXAxis.Y, vUp.Y, vForward.Y, vLocation.Y);
        matrix.Row2 = new Vector4(vXAxis.Z, vUp.Z, vForward.Z, vLocation.Z);
        matrix.Row3 = new Vector4(0.0f, 0.0f, 0.0f, 1.0f);
        return matrix;
    }

    public void Move(float x, float y, float z)
    {
        vLocation = new Vector3(x, y, z);
    }

    public void RotateLocalZ(float angle)
    {
        Matrix4 rotMat;
        // Just rotate around the up vector
        // Create a rotation matrix around my Up (Y) vector
        rotMat = Matrix4.CreateFromAxisAngle(vForward, angle);
        Vector3 newVect;
        // Rotate forward pointing vector (inlined 3x3 transform)
        newVect.X = rotMat.M11 * vUp.X + rotMat.M12 * vUp.Y + rotMat.M13 * vUp.Z;
        newVect.Y = rotMat.M21 * vUp.X + rotMat.M22 * vUp.Y + rotMat.M23 * vUp.Z;
        newVect.Z = rotMat.M31 * vUp.X + rotMat.M32 * vUp.Y + rotMat.M33 * vUp.Z;
        vUp = newVect;
    }
}
So I create a new GameObject (GLFrame) at some coordinates, GameObject go = new GameObject(0, 0, 5);, and rotate it a bit: go.RotateLocalZ(rotZ);. Then I get the matrix using Matrix4 matrix = go.GetMatrix(); and render the frame (first I set the viewing matrix, and then I multiply it by the modeling matrix):
protected override void OnRenderFrame(FrameEventArgs e)
{
    base.OnRenderFrame(e);
    this.Title = "FPS: " + (1 / e.Time).ToString("0.0");
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
    GL.MatrixMode(MatrixMode.Modelview);
    GL.LoadIdentity();
    Matrix4 modelmatrix = go.GetMatrix();
    Matrix4 viewmatrix = Matrix4.LookAt(new Vector3(5, 5, -10), Vector3.Zero, Vector3.UnitY);
    GL.LoadMatrix(ref viewmatrix);
    GL.MultMatrix(ref modelmatrix);
    DrawCube(new float[] { 0.5f, 0.4f, 0.5f, 0.8f });
    SwapBuffers();
}
The DrawCube(float[] color) is my own method for drawing a cube.
Now the most important part: If I render the frame without the GL.MultMatrix(ref matrix); part, but using GL.Translate() and GL.Rotate(), it works (second screenshot). However, if I don't use these two methods and I pass the modeling matrix directly to OpenGL using GL.MultMatrix(), it draws something strange (first screenshot).
Can you help me and explain where the problem is? Why does it work using the translate and rotate methods, but not when multiplying the view matrix by the modeling matrix?
OpenGL transformation matrices are stored in column-major order. You should use the transpose of the matrix you are building.
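To see why the transpose matters here: GL.LoadMatrix/GL.MultMatrix read the 16 floats column by column, so a matrix built with the translation in the fourth component of each row (as GetMatrix above does) has its translation land in the wrong cells. A minimal Python sketch of the effect (helper names are mine):

```python
def transform(m, v):
    """Apply a 4x4 matrix (nested lists, row-major in Python) to (x, y, z, 1)."""
    v = (*v, 1.0)
    return tuple(sum(m[i][j] * v[j] for j in range(4)) for i in range(3))

# Translation by (1, 2, 3) with the offset in the last column of each row:
t = [[1, 0, 0, 1],
     [0, 1, 0, 2],
     [0, 0, 1, 3],
     [0, 0, 0, 1]]
assert transform(t, (0, 0, 0)) == (1.0, 2.0, 3.0)

# Interpreting the same 16 numbers with rows and columns swapped (which is
# what a column-major reader does with this memory layout) loses the
# translation entirely -- hence the garbled geometry:
flipped = [[t[j][i] for j in range(4)] for i in range(4)]
assert transform(flipped, (0, 0, 0)) == (0.0, 0.0, 0.0)
```

GL.Translate/GL.Rotate work because OpenGL builds those matrices itself in its own layout; only the hand-built matrix needs the transpose.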

Rotation in XNA

I have a question concerning model rotation in XNA. The question is: what should I do (which values should be changed, and how) to rotate the green model in the way the red arrows indicate:
http://img843.imageshack.us/i/question1.jpg/
Code used to draw:
DrawModel(elementD, new Vector3(-1, 0.5f, 0.55f), 0,-90,0);
private void DrawModel(Model model, Vector3 position, float rotXInDeg, float rotYInDeg, float rotZInDeg)
{
    float rotX = (float)(rotXInDeg * Math.PI / 180);
    float rotY = (float)(rotYInDeg * Math.PI / 180);
    float rotZ = (float)(rotZInDeg * Math.PI / 180);
    Matrix worldMatrix = Matrix.CreateScale(0.5f, 0.5f, 0.5f) * Matrix.CreateRotationY(rotY) * Matrix.CreateRotationX(rotX) * Matrix.CreateRotationZ(rotZ) * Matrix.CreateTranslation(position);
    Matrix[] xwingTransforms = new Matrix[model.Bones.Count];
    model.CopyAbsoluteBoneTransformsTo(xwingTransforms);
    foreach (ModelMesh mesh in model.Meshes)
    {
        foreach (BasicEffect effect in mesh.Effects)
        {
            effect.EnableDefaultLighting();
            effect.View = cam.viewMatrix;
            effect.Projection = cam.projectionMatrix;
            effect.World = xwingTransforms[mesh.ParentBone.Index] * worldMatrix;
        }
        mesh.Draw();
    }
}
So I tried to apply your solution by changing my code slightly:
Matrix worldMatrix = Matrix.CreateScale(0.5f, 0.5f, 0.5f) * Matrix.CreateRotationY(rotY) * Matrix.CreateRotationX(rotX) * Matrix.CreateTranslation(position)
    * Matrix.CreateRotationX((float)(45 * Math.PI / 180)) * Matrix.CreateRotationZ(rotZ) * Matrix.CreateRotationX((float)(-45 * Math.PI / 180));
By changing the rotZ parameter I was indeed able to rotate the model. However, the effect is not what I wanted to achieve (http://img225.imageshack.us/i/questionau.jpg/): the model changed its position. Is it because of a faulty model or some other mistake? I want the "cylinder" to remain in its position. Do you know how I can do this?
Rotations with rotation matrices are cumulative, so you can probably calculate your rotation matrix by rotating the model 45 degrees "down", then applying your rotation around the wanted axis, then rotating the model 45 degrees "up" again. The product of these three matrices should give you your desired matrix.
Another option is to use a Quaternion to generate the rotation matrix. A Quaternion is basically an axis and a rotation around it, which may be easier to manipulate in this case.
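The tilt-spin-tilt-back sandwich described above is a conjugation: it is the same as rotating about the tilted axis, which is why the three matrices combine into the wanted rotation. A minimal Python sketch of the matrix algebra (angle values and helper names are mine):

```python
import math

def rot_x(a):
    """3x3 rotation about the X axis by a radians."""
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_z(a):
    """3x3 rotation about the Z axis by a radians."""
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, v):
    return tuple(sum(m[i][j] * v[j] for j in range(3)) for i in range(3))

# Tilt 45 degrees about X, spin about Z, tilt back:
tilt = math.radians(45)
spin = math.radians(30)
m = matmul(rot_x(tilt), matmul(rot_z(spin), rot_x(-tilt)))

# The product is a pure rotation about the TILTED axis, so that axis is left
# unchanged by it: rot_x(tilt) applied to the original Z axis.
axis = apply(rot_x(tilt), (0.0, 0.0, 1.0))
assert all(abs(a - b) < 1e-9 for a, b in zip(apply(m, axis), axis))
```

Since the tilted axis is fixed, the model spins in place around it rather than orbiting, which is the behavior the question is after.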
