Applying modeling matrix to view matrix = failure - c#

I've got a problem with moving and rotating objects in OpenGL. I'm using C# and OpenTK (Mono), but I guess the problem is with me not understanding the OpenGL part, so you might be able to help me even if you don't know anything about C# / OpenTK.
I'm reading the OpenGL SuperBible (latest edition) and I tried to rewrite the GLFrame in C#. Here is the part I've already rewritten:
public class GameObject
{
    protected Vector3 vLocation;
    public Vector3 vUp;
    protected Vector3 vForward;

    public GameObject(float x, float y, float z)
    {
        vLocation = new Vector3(x, y, z);
        vUp = Vector3.UnitY;
        vForward = Vector3.UnitZ;
    }

    public Matrix4 GetMatrix(bool rotationOnly = false)
    {
        Matrix4 matrix;
        Vector3 vXAxis;
        Vector3.Cross(ref vUp, ref vForward, out vXAxis);
        matrix = new Matrix4();
        matrix.Row0 = new Vector4(vXAxis.X, vUp.X, vForward.X, vLocation.X);
        matrix.Row1 = new Vector4(vXAxis.Y, vUp.Y, vForward.Y, vLocation.Y);
        matrix.Row2 = new Vector4(vXAxis.Z, vUp.Z, vForward.Z, vLocation.Z);
        matrix.Row3 = new Vector4(0.0f, 0.0f, 0.0f, 1.0f);
        return matrix;
    }

    public void Move(float x, float y, float z)
    {
        vLocation = new Vector3(x, y, z);
    }

    public void RotateLocalZ(float angle)
    {
        // Create a rotation matrix around the forward (local Z) vector
        Matrix4 rotMat = Matrix4.CreateFromAxisAngle(vForward, angle);
        Vector3 newVect;
        // Rotate the up vector (inlined 3x3 transform)
        newVect.X = rotMat.M11 * vUp.X + rotMat.M12 * vUp.Y + rotMat.M13 * vUp.Z;
        newVect.Y = rotMat.M21 * vUp.X + rotMat.M22 * vUp.Y + rotMat.M23 * vUp.Z;
        newVect.Z = rotMat.M31 * vUp.X + rotMat.M32 * vUp.Y + rotMat.M33 * vUp.Z;
        vUp = newVect;
    }
}
So I create a new GameObject (GLFrame) at some coordinates: GameObject go = new GameObject(0, 0, 5); and rotate it a bit: go.RotateLocalZ(rotZ);. Then I get the matrix using Matrix4 matrix = go.GetMatrix(); and render the frame (first I set the viewing matrix, then I multiply it by the modeling matrix):
protected override void OnRenderFrame(FrameEventArgs e)
{
    base.OnRenderFrame(e);
    this.Title = "FPS: " + (1 / e.Time).ToString("0.0");
    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
    GL.MatrixMode(MatrixMode.Modelview);
    GL.LoadIdentity();

    Matrix4 modelmatrix = go.GetMatrix();
    Matrix4 viewmatrix = Matrix4.LookAt(new Vector3(5, 5, -10), Vector3.Zero, Vector3.UnitY);

    GL.LoadMatrix(ref viewmatrix);
    GL.MultMatrix(ref modelmatrix);

    DrawCube(new float[] { 0.5f, 0.4f, 0.5f, 0.8f });
    SwapBuffers();
}
The DrawCube(float[] color) is my own method for drawing a cube.
Now the most important part: if I render the frame without the GL.MultMatrix(ref modelmatrix); call and use GL.Translate() and GL.Rotate() instead, it works (second screenshot). However, if I skip those two calls and pass the modeling matrix directly to OpenGL with GL.MultMatrix(), it draws something strange (first screenshot).
Can you help me and explain where the problem is? Why does it work with the translate and rotate calls, but not when multiplying the view matrix by the modeling matrix?

OpenGL transformation matrices are ordered column-wise, and your GetMatrix() fills them row-wise. You should use the transpose of the matrix you are building.
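A sketch of what that means with the OpenTK types used above (an assumption on my side: OpenTK's Matrix4 uses the row-vector convention, and GL.LoadMatrix/GL.MultMatrix consume its rows as OpenGL's columns, which is also the layout Matrix4.LookAt produces). Either transpose the result of GetMatrix() before multiplying it in, or build the matrix with the basis vectors in the rows and the position in Row3:

// Quick fix: transpose before handing the matrix to the fixed-function pipeline.
Matrix4 modelmatrix = Matrix4.Transpose(go.GetMatrix());
GL.MultMatrix(ref modelmatrix);

// Or build GetMatrix() in OpenTK's layout to begin with
// (a sketch, not compiled against any particular OpenTK version):
public Matrix4 GetMatrix(bool rotationOnly = false)
{
    Vector3 vXAxis;
    Vector3.Cross(ref vUp, ref vForward, out vXAxis);

    Matrix4 matrix = new Matrix4();
    matrix.Row0 = new Vector4(vXAxis, 0.0f);     // local X axis
    matrix.Row1 = new Vector4(vUp, 0.0f);        // local Y axis (up)
    matrix.Row2 = new Vector4(vForward, 0.0f);   // local Z axis (forward)
    matrix.Row3 = rotationOnly
        ? Vector4.UnitW
        : new Vector4(vLocation, 1.0f);          // translation goes in the last row
    return matrix;
}

Either variant should make GL.MultMatrix behave the same way as the GL.Translate()/GL.Rotate() path.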

Related

encoding vertex positions into textures, positions don't match and index is always zero

I am trying to make a tool to encode vertex positions into a texture. The tool takes a sequence of Wavefront obj files and exports 2 textures. I am, for the most part, following this guide. I am using C# and Veldrid for my program. My program also shows a preview to see what the result looks like. I am having trouble getting my preview to use the textures correctly. The textures have the below mapping.
Texture 1:
RG - X Position
BA - Y Position
Texture 2:
RG - Z Position
BA - Normals, eventually (haven't gotten there yet).
I have two issues. My first issue is the decoded position is not being decoded correctly. The second issue is that gl_VertexIndex seems to always be zero.
For my first issue, in order to see what was going on, I set the texture coords for the texture to 0, 0 to sample the first vertex of the first frame. I also removed any view transformation so that I could see the actual values in renderdoc.
In Renderdoc, the VS_Input is 11.67803, 1.00, -11.06608 and the VS_Out is 5.75159, 1.99283, -5.03286. When using gl_VertexIndex, all the vertices for VS_Out read the same thing.
#version 450

layout(location = 0) in vec3 Position;
layout(location = 1) in vec3 Normal;
layout(location = 2) in vec2 TexCoords;
layout(location = 3) in uint Id;

layout(location = 0) out vec3 outNormal;
layout(location = 1) out vec4 outDebug;

layout(set = 0, binding = 0) uniform MVP {
    mat4 Model;
    mat4 View;
    mat4 Projection;
};

layout(set = 0, binding = 1) uniform sampler textureSampler;
layout(set = 0, binding = 2) uniform texture2D posTex;
layout(set = 0, binding = 3) uniform texture2D normalTex;

float RemapRange(float value, float from1, float to1, float from2, float to2){
    return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}

float DecodeFloatRG (vec2 enc){
    vec2 kDecodeDot = vec2 (1.0, 1 / 255.0);
    return dot(enc, kDecodeDot);
}

void main(){
    outDebug = Projection * View * Model * vec4(Position, 1.0f);

    vec2 coords = vec2(0, 0);
    vec4 pos = textureLod(sampler2D(posTex, textureSampler), coords, 0);
    vec4 normal = textureLod(sampler2D(normalTex, textureSampler), coords, 0);

    vec3 decodedPos;
    decodedPos.x = DecodeFloatRG(pos.xy);
    decodedPos.y = DecodeFloatRG(pos.zw);
    decodedPos.z = DecodeFloatRG(normal.xy);

    float x = RemapRange(decodedPos.x, 0.0f, 1.0f, -13.0f, 13.0f); // right now this is hardcoded
    float y = RemapRange(decodedPos.y, 0.0f, 1.0f, -13.0f, 13.0f);
    float z = RemapRange(decodedPos.z, 0.0f, 1.0f, -13.0f, 13.0f);

    //gl_Position = Projection * View * Model * vec4(x, y, z, 1.0f);
    gl_Position = vec4(x, y, z, 1.0f);
    //gl_Position = vec4(Position, 1.0f);

    outNormal = Normal;
}
For the second issue, the shader is the same, but instead I'm using:
coords = vec2(gl_VertexIndex, 0)
I'm also not sure that using vertex index is the best way to go about this, as it seems like most game engines don't have this exposed.
On the CPU side, I encode the textures using the below:
//https://forum.unity.com/threads/re-map-a-number-from-one-range-to-another.119437/
protected float RemapRange(float value, float from1, float to1, float from2, float to2){
    return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}

//https://medium.com/tech-at-wildlife-studios/texture-animation-techniques-1daecb316657
protected Vector2 EncodeFloatRG (float v){
    Vector2 kEncodeMul = new Vector2(1.0f, 255.0f);
    float kEncodeBit = 1.0f / 255.0f;
    Vector2 enc = kEncodeMul * v;
    enc.X = fract(enc.X);
    enc.Y = fract(enc.Y);
    enc.X -= enc.Y * kEncodeBit;
    return enc;
}

float fract(float x){
    return x - MathF.Floor(x);
}
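Not part of the question, but as a sanity check on the packing scheme itself: the encode/decode pair above should round-trip a 0..1 value to roughly 16-bit precision even after 8-bit quantization. A small standalone sketch (System.Numerics, mirroring the helpers above; the test value is arbitrary):

using System;
using System.Numerics;

static class RGRoundTrip
{
    static Vector2 EncodeFloatRG(float v)
    {
        Vector2 enc = new Vector2(1.0f, 255.0f) * v;
        enc.X = Fract(enc.X);
        enc.Y = Fract(enc.Y);
        enc.X -= enc.Y * (1.0f / 255.0f);
        return enc;
    }

    static float DecodeFloatRG(Vector2 enc) => enc.X + enc.Y * (1.0f / 255.0f);

    static float Fract(float x) => x - MathF.Floor(x);

    static void Main()
    {
        float v = 0.7372549f;                        // some value already remapped to 0..1
        Vector2 enc = EncodeFloatRG(v);

        // Simulate the 8-bit quantization a UNorm texture applies per channel.
        enc = new Vector2(MathF.Round(enc.X * 255f) / 255f,
                          MathF.Round(enc.Y * 255f) / 255f);

        Console.WriteLine($"{v} -> {DecodeFloatRG(enc)}"); // should differ by well under 1/65000
    }
}

If this round-trips correctly but the shader still reads garbage, the problem is in how the texture is stored or sampled, not in the packing.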
This is what the loop writing the pixels looks like. There is another one for the second texture, but it's pretty much the same.
posImg.Mutate(c => c.ProcessPixelRowsAsVector4(row =>
{
    for (int x = 0; x < row.Length; x++)
    {
        var obj = meshes[y];
        var vertex = obj.Vertices[x];
        var pixel = new Vector4();

        float X = RemapRange(vertex.Position.X, bounds.Min, bounds.Max, 0.0f, 1.0f);
        float Y = RemapRange(vertex.Position.Y, bounds.Min, bounds.Max, 0.0f, 1.0f);

        var encodedX = EncodeFloatRG(X);
        var encodedY = EncodeFloatRG(Y);

        pixel.X = encodedX.X;
        pixel.Y = encodedX.Y;
        pixel.Z = encodedY.X;
        pixel.W = encodedY.Y;

        row[x] = pixel;
    }
    y += 1;
}));
Here is how I am creating and loading the textures in Veldrid. As far as the sampler goes, it is gd.PointSampler. I have tried turning sRGB on and off on the ImageSharpTexture(), and using R8_G8_B8_A8_UNorm_SRgb and R8_G8_B8_A8_UNorm, in pretty much any combination.
var posTex = new Veldrid.ImageSharp.ImageSharpTexture(posPath, false, true);
var normalTex = new Veldrid.ImageSharp.ImageSharpTexture(normalPath, false, true);
var posDeviceTex = posTex.CreateDeviceTexture(gd, gd.ResourceFactory);
var normalDeviceTex = normalTex.CreateDeviceTexture(gd, gd.ResourceFactory);
var posViewDesc = new TextureViewDescription(posDeviceTex, PixelFormat.R8_G8_B8_A8_UNorm_SRgb);
var normalViewDesc = new TextureViewDescription(normalDeviceTex, PixelFormat.R8_G8_B8_A8_UNorm_SRgb);
positionTexture = gd.ResourceFactory.CreateTextureView(posViewDesc);
normalTexture = gd.ResourceFactory.CreateTextureView(normalViewDesc);
EDIT:
I tried hard-coding the value of pixel (0, 0) of the texture in the shader, as below. When I do this, the result is correct and matches the original vertex position. When reading the pixel values of the texture in the shader and exporting them directly, the values are wrong, so I am thinking there is some compression or color-space weirdness going on when the texture is read in. In the shader, the correct value for the pixel at (0, 0) should be (0.9490196, 0.03529412, 0.5372549, 0.30588236), but in RenderDoc it shows as (0.55492, 0.28516, 0.29102, 0.54314).
outDebug = Projection * View * Model * vec4(Position, 1.0f);
vec2 coords = vec2(0.0, 0.0);
vec4 pos = textureLod(sampler2D(posTex, textureSampler), coords, 0);
vec4 normal = textureLod(sampler2D(normalTex, textureSampler), coords, 0);
pos = vec4(0.9490196, 0.03529412, 0.5372549, 0.30588236);
normal = vec4(0.07058824, 0.96862745, 1, 1);
vec3 decodedPos;
decodedPos.x = DecodeFloatRG(pos.xy);
decodedPos.y = DecodeFloatRG(pos.zw);
decodedPos.z = DecodeFloatRG(normal.xy);
float x = RemapRange(decodedPos.x, 0.0f, 1.0f, -13.0f, 13.0f);
float y = RemapRange(decodedPos.y, 0.0f, 1.0f, -13.0f, 13.0f);
float z = RemapRange(decodedPos.z, 0.0f, 1.0f, -13.0f, 13.0f);
gl_Position = vec4(x, y, z, 1.0f);
Texture 1 and Texture 2 (image attachments):
Google Drive With Textures, Obj, and Metadata
So I figured this out. The texture-loading block above needs to be this instead:
var posTex = new Veldrid.ImageSharp.ImageSharpTexture(posPath, false, true);
var normalTex = new Veldrid.ImageSharp.ImageSharpTexture(normalPath, false, true);
var posDeviceTex = posTex.CreateDeviceTexture(gd, gd.ResourceFactory);
var normalDeviceTex = normalTex.CreateDeviceTexture(gd, gd.ResourceFactory);
var posViewDesc = new TextureViewDescription(posDeviceTex, PixelFormat.R8_G8_B8_A8_UNorm);
var normalViewDesc = new TextureViewDescription(normalDeviceTex, PixelFormat.R8_G8_B8_A8_UNorm);
positionTexture = gd.ResourceFactory.CreateTextureView(posViewDesc);
normalTexture = gd.ResourceFactory.CreateTextureView(normalViewDesc);
While fixing this, someone also mentioned that I needed to declare my TextureViews before my sampler if I wanted to use the same sampler for both TextureViews.
As far as gl_VertexIndex goes, I'm looking into how to map the data into a spare UV channel instead, as that should always be available in any game engine.

Unity - Determining UVs for a circular plane mesh generated by code

I'm trying to generate a circular mesh made up of triangles with a common center at the center of the circle.
The mesh is generated properly, but the UVs are not and I am having some trouble understanding how to add them.
I assumed I would just copy the vertexes' pattern, but it didn't work out.
Here is the function:
private void _MakeMesh(int sides, float radius = 0.5f)
{
    m_LiquidMesh.Clear();
    float angleStep = 360.0f / (float) sides;
    List<Vector3> vertexes = new List<Vector3>();
    List<int> triangles = new List<int>();
    List<Vector2> uvs = new List<Vector2>();
    Quaternion rotation = Quaternion.Euler(0.0f, angleStep, 0.0f);

    // Make first triangle.
    vertexes.Add(new Vector3(0.0f, 0.0f, 0.0f));
    vertexes.Add(new Vector3(radius, 0.0f, 0.0f));
    vertexes.Add(rotation * vertexes[1]);

    // First UV ??
    uvs.Add(new Vector2(0, 0));
    uvs.Add(new Vector2(1, 0));
    uvs.Add(rotation * uvs[1]);

    // Add triangle indices.
    triangles.Add(0);
    triangles.Add(1);
    triangles.Add(2);

    for (int i = 0; i < sides - 1; i++)
    {
        triangles.Add(0);
        triangles.Add(vertexes.Count - 1);
        triangles.Add(vertexes.Count);
        // UV ??
        vertexes.Add(rotation * vertexes[vertexes.Count - 1]);
    }

    m_LiquidMesh.vertices = vertexes.ToArray();
    m_LiquidMesh.triangles = triangles.ToArray();
    m_LiquidMesh.uv = uvs.ToArray();
    m_LiquidMesh.RecalculateNormals();
    m_LiquidMesh.RecalculateBounds();

    Debug.Log("<color=yellow>Liquid mesh created</color>");
}
How does mapping UV work in a case like this?
Edit: I'm trying to use this circle as an effect of something flowing outwards from the center (think: liquid mesh for a brewing pot)
This is an old post, but maybe someone else will benefit from my solution.
Basically, I gave my center point the center of the UV space (0.5, 0.5) and then used the unit circle formula to give every other point its UV coordinate. Of course I had to remap the cos and sin results from -1..1 to 0..1, and now everything is working great.
Vector2[] uv = new Vector2[vertices.Length];
uv[uv.Length - 1] = new Vector2(0.5f, 0.5f);

for (int i = 0; i < uv.Length - 1; i++)
{
    float radians = (float) i / (uv.Length - 1) * 2 * Mathf.PI;
    uv[i] = new Vector2(Mathf.Cos(radians).Remap(-1f, 1f, 0f, 1f),
                        Mathf.Sin(radians).Remap(-1f, 1f, 0f, 1f));
}

mesh.uv = uv;
Where Remap is an extension method like the one below; it takes a value in one range and remaps it to another range (in this case from -1..1 to 0..1):
public static float Remap(this float value, float from1, float to1, float from2, float to2) {
    return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}

How to check if device has been rotated on all axis in Unity

I want to check in Unity if the device has been rotated on all of its axes.
So I am reading the rotation of all the axes.
What should I do in order to validate, for example, that the user has "flipped" his device over the X axis? I need to check the values and see that they pass through 0, 90, 180 and 270 degrees in a loop.
Here is part of my code:
void Update () {
    float X = Input.acceleration.x;
    float Y = Input.acceleration.y;
    float Z = Input.acceleration.z;

    xText.text = ((Mathf.Atan2(Y, Z) * 180 / Mathf.PI) + 180).ToString();
    yText.text = ((Mathf.Atan2(X, Z) * 180 / Mathf.PI) + 180).ToString();
    zText.text = ((Mathf.Atan2(X, Y) * 180 / Mathf.PI) + 180).ToString();
}
The accelerometer only tells you how the acceleration of the device changes, so you will get values when the device starts or stops moving. You can't retrieve its orientation from that.
Instead you need to use the gyroscope of the device. Most devices have one nowadays.
Fortunately, Unity supports the gyroscope through the Gyroscope class.
Simply using
Input.gyro.attitude
will give you the orientation of the device in space, in the form of a quaternion.
To check the angles, use the eulerAngles property; for instance, to check whether the device is flipped on the X axis:
Vector3 angles = Input.gyro.attitude.eulerAngles;
bool xFlipped = angles.x > 180;
Be careful: you might have to invert some values if you want to apply the rotation in Unity (because it depends on which handedness the device uses for positive values).
// The Gyroscope is right-handed. Unity is left-handed.
// Make the necessary change to the camera.
private static Quaternion GyroToUnity(Quaternion q)
{
    return new Quaternion(q.x, q.y, -q.z, -q.w);
}
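To get closer to what the question actually asks (has the device passed through 0, 90, 180 and 270 degrees on an axis), one option is to bucket the gyro angle each frame and remember which targets have been hit. A rough sketch of my own, not from the Unity docs; the 20-degree tolerance is arbitrary:

using System.Collections.Generic;
using UnityEngine;

public class FlipTracker : MonoBehaviour
{
    private readonly HashSet<int> seenX = new HashSet<int>(); // which of 0/90/180/270 we have hit
    private const float tolerance = 20f;                      // degrees, arbitrary

    void Start()
    {
        Input.gyro.enabled = true;   // the gyroscope must be enabled explicitly
    }

    void Update()
    {
        float x = Input.gyro.attitude.eulerAngles.x;
        foreach (int target in new[] { 0, 90, 180, 270 })
        {
            if (Mathf.Abs(Mathf.DeltaAngle(x, target)) < tolerance)
                seenX.Add(target);
        }

        if (seenX.Count == 4)
            Debug.Log("Device has been rotated through a full turn on the X axis");
    }
}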
Here is the full example from the docs (Unity version 2017.3), in case the link above is broken. It shows how to read values from the gyroscope and apply them to an object in Unity.
// Create a cube with camera vector names on the faces.
// Allow the device to show named faces as it is oriented.
using UnityEngine;

public class ExampleScript : MonoBehaviour
{
    // Faces for 6 sides of the cube
    private GameObject[] quads = new GameObject[6];

    // Textures for each quad, should be +X, +Y etc
    // with appropriate colors, red, green, blue, etc
    public Texture[] labels;

    void Start()
    {
        // make camera solid colour and based at the origin
        GetComponent<Camera>().backgroundColor = new Color(49.0f / 255.0f, 77.0f / 255.0f, 121.0f / 255.0f);
        GetComponent<Camera>().transform.position = new Vector3(0, 0, 0);
        GetComponent<Camera>().clearFlags = CameraClearFlags.SolidColor;

        // create the six quads forming the sides of a cube
        GameObject quad = GameObject.CreatePrimitive(PrimitiveType.Quad);
        quads[0] = createQuad(quad, new Vector3(1, 0, 0), new Vector3(0, 90, 0), "plus x",
            new Color(0.90f, 0.10f, 0.10f, 1), labels[0]);
        quads[1] = createQuad(quad, new Vector3(0, 1, 0), new Vector3(-90, 0, 0), "plus y",
            new Color(0.10f, 0.90f, 0.10f, 1), labels[1]);
        quads[2] = createQuad(quad, new Vector3(0, 0, 1), new Vector3(0, 0, 0), "plus z",
            new Color(0.10f, 0.10f, 0.90f, 1), labels[2]);
        quads[3] = createQuad(quad, new Vector3(-1, 0, 0), new Vector3(0, -90, 0), "neg x",
            new Color(0.90f, 0.50f, 0.50f, 1), labels[3]);
        quads[4] = createQuad(quad, new Vector3(0, -1, 0), new Vector3(90, 0, 0), "neg y",
            new Color(0.50f, 0.90f, 0.50f, 1), labels[4]);
        quads[5] = createQuad(quad, new Vector3(0, 0, -1), new Vector3(0, 180, 0), "neg z",
            new Color(0.50f, 0.50f, 0.90f, 1), labels[5]);
        GameObject.Destroy(quad);
    }

    // make a quad for one side of the cube
    GameObject createQuad(GameObject quad, Vector3 pos, Vector3 rot, string name, Color col, Texture t)
    {
        Quaternion quat = Quaternion.Euler(rot);
        GameObject GO = Instantiate(quad, pos, quat);
        GO.name = name;
        GO.GetComponent<Renderer>().material.color = col;
        GO.GetComponent<Renderer>().material.mainTexture = t;
        GO.transform.localScale += new Vector3(0.25f, 0.25f, 0.25f);
        return GO;
    }

    protected void Update()
    {
        GyroModifyCamera();
    }

    protected void OnGUI()
    {
        GUI.skin.label.fontSize = Screen.width / 40;
        GUILayout.Label("Orientation: " + Screen.orientation);
        GUILayout.Label("input.gyro.attitude: " + Input.gyro.attitude);
        GUILayout.Label("iphone width/font: " + Screen.width + " : " + GUI.skin.label.fontSize);
    }

    /********************************************/

    // The Gyroscope is right-handed. Unity is left-handed.
    // Make the necessary change to the camera.
    void GyroModifyCamera()
    {
        transform.rotation = GyroToUnity(Input.gyro.attitude);
    }

    private static Quaternion GyroToUnity(Quaternion q)
    {
        return new Quaternion(q.x, q.y, -q.z, -q.w);
    }
}

OpenTK GL.Translate 2D camera on GLControl

I am making an ASCII game using OpenTK & WinForms, and I am stuck on camera movement and the 2D view.
This is the code I am using for the mouse event and for translating positions:
public static Point convertScreenToWorldCoords(int x, int y)
{
    int[] viewport = new int[4];
    Matrix4 modelViewMatrix, projectionMatrix;
    GL.GetFloat(GetPName.ModelviewMatrix, out modelViewMatrix);
    GL.GetFloat(GetPName.ProjectionMatrix, out projectionMatrix);
    GL.GetInteger(GetPName.Viewport, viewport);

    Vector2 mouse;
    mouse.X = x;
    mouse.Y = y;

    Vector4 vector = UnProject(ref projectionMatrix, modelViewMatrix, new Size(viewport[2], viewport[3]), mouse);
    Point coords = new Point((int)vector.X, (int)vector.Y);
    return coords;
}

public static Vector4 UnProject(ref Matrix4 projection, Matrix4 view, Size viewport, Vector2 mouse)
{
    Vector4 vec;
    vec.X = 2.0f * mouse.X / (float)viewport.Width - 1;
    vec.Y = 2.0f * mouse.Y / (float)viewport.Height - 1;
    vec.Z = 0;
    vec.W = 1.0f;

    Matrix4 viewInv = Matrix4.Invert(view);
    Matrix4 projInv = Matrix4.Invert(projection);

    Vector4.Transform(ref vec, ref projInv, out vec);
    Vector4.Transform(ref vec, ref viewInv, out vec);

    // Only divide through by W when it is not (almost) zero.
    if (vec.W > float.Epsilon || vec.W < -float.Epsilon)
    {
        vec.X /= vec.W;
        vec.Y /= vec.W;
        vec.Z /= vec.W;
    }

    return vec;
}
//on mouse click event
Control control = sender as Control;
Point worldCoords = convertScreenToWorldCoords(e.X, control.ClientRectangle.Height - e.Y);
playerX = (int)Math.Floor((double)worldCoords.X / 9d);
playerY = (int)Math.Floor((double)worldCoords.Y / 9d);
And this code sets up my projection, but something is wrong here...
//Set Projection
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(0, Width * charWidth * scale, Height * charHeight * scale, 0, -1, 1);
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
GL.Translate(playerX, playerY, 0);
My problem is GL.Translate. As written, the view is not centered on playerX/playerY, and the camera movement seems reversed. If I use GL.Translate(-playerX, -playerY, 0); the movement direction looks correct, but the view is still not centered on the player object (the player should always be at the center of the view, like a typical top-down camera). I don't know how to set this up correctly; my experiments with multiplying, dividing, etc. on the X/Y position did not give the correct view. How should it be done in this case?
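A common way to keep the player centered in a top-down ortho view is to translate by the viewport center minus the player's position in pixels. A sketch only (not from the original thread; it reuses the variables from the projection code above and assumes playerX/playerY are tile coordinates with tiles of charWidth x charHeight pixels):

// Sketch: center the ortho view on the player.
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(0, Width * charWidth * scale, Height * charHeight * scale, 0, -1, 1);

GL.MatrixMode(MatrixMode.Modelview);
GL.LoadIdentity();
float viewCenterX = Width * charWidth * scale * 0.5f;
float viewCenterY = Height * charHeight * scale * 0.5f;
// Move the world so the player's pixel position lands in the middle of the viewport.
GL.Translate(viewCenterX - playerX * charWidth, viewCenterY - playerY * charHeight, 0);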

Drawing a textured quad using XNA

I'm attempting to render a textured quad using the example located here.
I can successfully render the quad, but the texture information appears to be lost. The quad takes the color of the underlying texture, though.
I've checked the obvious problems ("Does the BasicEffect rendering the quad have the TextureEnabled property set to true?") and can't immediately see the problem.
Code below:
public class Quad
{
    public VertexPositionNormalTexture[] Vertices;
    public Vector3 Origin;
    public Vector3 Up;
    public Vector3 Normal;
    public Vector3 Left;
    public Vector3 UpperLeft;
    public Vector3 UpperRight;
    public Vector3 LowerLeft;
    public Vector3 LowerRight;
    public int[] Indexes;

    public Quad(Vector3 origin, Vector3 normal, Vector3 up,
        float width, float height)
    {
        this.Vertices = new VertexPositionNormalTexture[4];
        this.Indexes = new int[6];
        this.Origin = origin;
        this.Normal = normal;
        this.Up = up;

        // Calculate the quad corners
        this.Left = Vector3.Cross(normal, this.Up);
        Vector3 uppercenter = (this.Up * height / 2) + origin;
        this.UpperLeft = uppercenter + (this.Left * width / 2);
        this.UpperRight = uppercenter - (this.Left * width / 2);
        this.LowerLeft = this.UpperLeft - (this.Up * height);
        this.LowerRight = this.UpperRight - (this.Up * height);

        this.FillVertices();
    }

    private void FillVertices()
    {
        Vector2 textureUpperLeft = new Vector2(0.0f, 0.0f);
        Vector2 textureUpperRight = new Vector2(1.0f, 0.0f);
        Vector2 textureLowerLeft = new Vector2(0.0f, 1.0f);
        Vector2 textureLowerRight = new Vector2(1.0f, 1.0f);

        for (int i = 0; i < this.Vertices.Length; i++)
        {
            this.Vertices[i].Normal = this.Normal;
        }

        this.Vertices[0].Position = this.LowerLeft;
        this.Vertices[0].TextureCoordinate = textureLowerLeft;
        this.Vertices[1].Position = this.UpperLeft;
        this.Vertices[1].TextureCoordinate = textureUpperLeft;
        this.Vertices[2].Position = this.LowerRight;
        this.Vertices[2].TextureCoordinate = textureLowerRight;
        this.Vertices[3].Position = this.UpperRight;
        this.Vertices[3].TextureCoordinate = textureUpperRight;

        this.Indexes[0] = 0;
        this.Indexes[1] = 1;
        this.Indexes[2] = 2;
        this.Indexes[3] = 2;
        this.Indexes[4] = 1;
        this.Indexes[5] = 3;
    }
}
this.quadEffect = new BasicEffect(this.GraphicsDevice, null);
this.quadEffect.AmbientLightColor = new Vector3(0.8f, 0.8f, 0.8f);
this.quadEffect.LightingEnabled = true;
this.quadEffect.World = Matrix.Identity;
this.quadEffect.View = this.View;
this.quadEffect.Projection = this.Projection;
this.quadEffect.TextureEnabled = true;
this.quadEffect.Texture = someTexture;
this.quad = new Quad(Vector3.Zero, Vector3.UnitZ, Vector3.Up, 2, 2);
this.quadVertexDecl = new VertexDeclaration(this.GraphicsDevice, VertexPositionNormalTexture.VertexElements);
public override void Draw(GameTime gameTime)
{
    this.GraphicsDevice.Textures[0] = this.SpriteDictionary["B1S1I800"];
    this.GraphicsDevice.VertexDeclaration = quadVertexDecl;

    quadEffect.Begin();
    foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes)
    {
        pass.Begin();
        GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
            PrimitiveType.TriangleList,
            beamQuad.Vertices, 0, 4,
            beamQuad.Indexes, 0, 2);
        pass.End();
    }
    quadEffect.End();
}
From what I can see, this should work. The only thing I can imagine, which isn't in this code, is that the loading of the texture goes wrong somewhere. I also can't quite visualize what you mean by the quad taking the underlying color of the texture; do you have a screenshot for us?
Also, if something does show up, a very distorted version of your texture for example, it could be that the rendering of other things is affecting the rendering of the quad: for example, if you draw the quad while the GraphicsDevice has another vertex declaration set, if the previous thing rendered left some exotic render state behind, or if you're drawing the quad inside the drawing code of something else. Try isolating this code into a fresh project, or disable the rendering of everything else.
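If you do isolate it, this is roughly the kind of stripped-down Draw I mean (a sketch using the fields from your question, assuming the XNA 3.1-style API shown there; it also turns lighting off temporarily to rule that out):

public override void Draw(GameTime gameTime)
{
    GraphicsDevice.Clear(Color.CornflowerBlue);
    GraphicsDevice.VertexDeclaration = quadVertexDecl;   // make sure the right declaration is set
    quadEffect.LightingEnabled = false;                   // rule lighting out while debugging
    quadEffect.TextureEnabled = true;

    quadEffect.Begin();
    foreach (EffectPass pass in quadEffect.CurrentTechnique.Passes)
    {
        pass.Begin();
        GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionNormalTexture>(
            PrimitiveType.TriangleList,
            quad.Vertices, 0, 4,
            quad.Indexes, 0, 2);
        pass.End();
    }
    quadEffect.End();
}

If the texture shows up in that isolated version, add your other rendering back piece by piece until it breaks again.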
