In my application I have a problem passing my world, view and projection matrices to the shader. I have set up a small engine to perform those tasks, and I am using PIX and Visual Studio to debug the output I get.
First, here is the code that relates to the vertices and indices:
Rendering.Geometry.MeshGeometry<uint> geom = Rendering.Geometry.MeshGeometry<uint>.Create(device);
var elem = Rendering.Geometry.VertexElement.CreatePosition3D(device);
float[] vertices = new float[9]
{
0, 0, -3,
0, 0, 3,
0, 5, 0,
};
elem.DataStream.WriteRange(vertices);
geom.AddVertexElement(elem);
var triangle = geom.Triangles.AddFace();
triangle.P1 = 0;
triangle.P2 = 1;
triangle.P3 = 2;
The geometry seems to be correct: when I debug my draw call in PIX I get the correct values for the vertices, (0/0/-3), (0/0/3) and (0/5/0), so I think the index buffer, vertex buffer, input layout and primitive topology are all set up correctly.
PIX also has that useful Pre-VS / Post-VS view. As mentioned, Pre-VS everything looks fine and the vertices are correct and in the right order. When I go to Post-VS and debug a vertex, I end up in my shader, where I can step through the instructions.
What is not correct are the matrices passed in through the constant buffer. Here is my shader:
cbuffer MatrixBuffer
{
float4x4 worldMatrix;
float4x4 viewMatrix;
float4x4 projectionMatrix;
};
struct VertexInputType
{
float4 position : POSITION;
};
struct PixelInputType
{
float4 position : SV_POSITION;
};
PixelInputType BasicEffectVS(VertexInputType input)
{
PixelInputType output = (PixelInputType)0;
float4x4 worldViewProj = worldMatrix * viewMatrix * projectionMatrix;
output.position = mul(input.position, worldViewProj);
output.position.w = 1.0f;
return output;
}
When I look at the three matrices in PIX, I see that except for worldMatrix they contain completely wrong values (even NaN) for viewMatrix and projectionMatrix. This is how I set the matrices in my application:
basicEffect.WorldMatrix = SlimDX.Matrix.Identity;
basicEffect.ViewMatrix = SlimDX.Matrix.Transpose(SlimDX.Matrix.LookAtLH(new SlimDX.Vector3(20, 5, 0), new SlimDX.Vector3(0, 5, 0), new SlimDX.Vector3(0, 1, 0)));
basicEffect.ProjectionMatrix = SlimDX.Matrix.Transpose(SlimDX.Matrix.PerspectiveFovLH((float)Math.PI / 4, ((float)f.ClientSize.Width / f.ClientSize.Height), 1.0f, 100.0f));
Debugging them in Visual Studio gives me the correct values. I then follow the SetValue call on the shader all the way down to the actual writing of the bytes, and everything is fine there.
The buffer is created the following way:
holder.buffer = new SlimDX.Direct3D11.Buffer(mShader.Device, new BufferDescription()
{
BindFlags = BindFlags.ConstantBuffer,
SizeInBytes = buffer.Description.Size,
Usage = ResourceUsage.Dynamic,
CpuAccessFlags = CpuAccessFlags.Write
});
Even worse:
If I add another matrix parameter to my shader and set a hardcoded matrix for that value, like:
Matrix mat = new Matrix()
{
M11 = 1,
M12 = 2,
...
};
in PIX I get exactly the values I expect. So my function for setting values on the shader must be right.
Does anyone have an idea where this comes from?
Make sure you remove this line:
output.position.w = 1.0f;
That is the projective component: since you already multiplied by your projection matrix, you need to send it to the pixel shader as it is.
Also, I would be quite careful with all the transposes; I'm not sure they are really needed.
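For what it's worth, one way to take most of the transpose question out of the picture is to combine the matrices on the CPU and upload a single worldViewProj. This is only a sketch, not your actual code: it assumes the cbuffer is collapsed to a single float4x4 worldViewProj, that HLSL's default column_major packing is in effect (hence the single Transpose on the row-major SlimDX product), and the setter on the last line is a hypothetical member of your effect class:
SlimDX.Matrix world = SlimDX.Matrix.Identity;
SlimDX.Matrix view = SlimDX.Matrix.LookAtLH(
    new SlimDX.Vector3(20, 5, 0), new SlimDX.Vector3(0, 5, 0), new SlimDX.Vector3(0, 1, 0));
SlimDX.Matrix proj = SlimDX.Matrix.PerspectiveFovLH(
    (float)Math.PI / 4, (float)f.ClientSize.Width / f.ClientSize.Height, 1.0f, 100.0f);
// Row-vector convention, matching mul(input.position, worldViewProj) in the shader.
SlimDX.Matrix worldViewProj = SlimDX.Matrix.Transpose(world * view * proj);
basicEffect.WorldViewProjectionMatrix = worldViewProj; // hypothetical property
With only one matrix crossing the CPU/GPU boundary, there is exactly one place where a missing or extra transpose can show up in PIX.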
I'm trying to put together information from this tutorial (Advanced-OpenGL/Instancing) and these answers (How to render using 2 VBO) (New API Clarification) in order to render instances of a square, passing the model matrix for each instance to the shader through an array buffer. The code I ended up with is below. I sliced and tested every part, and the problem seems to be that the model matrix itself is not passed correctly to the shader. I'm using OpenTK in Visual Studio.
For simplicity and debugging, the pool contains just a single square, so I don't yet have divisor problems or other subtleties to deal with.
My vertex data arrays contain 3 floats for position and 4 floats for color (stride = 7 times the float size).
My results with the attached code are:
If I remove the imodel multiplication in the vertex shader, I get exactly what I expect: a red square (rendered as 2 triangles) with a green border (rendered as a line loop).
If I change the shader and multiply by the model matrix, I get a red line above the center of the screen whose length changes over time. The animation makes sense, because the simulation is rotating the square, so the angle updates regularly and the computed model matrix changes with it. That is another encouraging result, because I'm actually sending dynamic data to the shader. However, I can't get my original square rotated and translated.
Any clue?
Thanks a lot.
Vertex Shader:
#version 430 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec4 aCol;
layout (location = 2) in mat4 imodel;
out vec4 fColor;
uniform mat4 view;
uniform mat4 projection;
void main() {
fColor = aCol;
gl_Position = vec4(aPos, 1.0) * imodel * view * projection;
}
Fragment Shader:
#version 430 core
in vec4 fColor;
out vec4 FragColor;
void main() {
FragColor = fColor;
}
OnLoad snippet (initialization):
InstanceVBO = GL.GenBuffer();
GL.GenBuffers(2, VBO);
GL.BindBuffer(BufferTarget.ArrayBuffer, VBO[0]);
GL.BufferData(BufferTarget.ArrayBuffer,
7 * LineLoopVertCount * sizeof(float),
LineLoopVertData, BufferUsageHint.StaticDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, VBO[1]);
GL.BufferData(BufferTarget.ArrayBuffer,
7 * TrianglesVertCount * sizeof(float),
TrianglesVertData, BufferUsageHint.StaticDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
// VAO SETUP
VAO = GL.GenVertexArray();
GL.BindVertexArray(VAO);
// Position
GL.EnableVertexAttribArray(0);
GL.VertexAttribFormat(0, 3, VertexAttribType.Float, false, 0);
GL.VertexArrayAttribBinding(VAO, 0, 0);
// Color
GL.EnableVertexAttribArray(1);
GL.VertexAttribFormat(1, 4, VertexAttribType.Float, false, 3 * sizeof(float));
GL.VertexArrayAttribBinding(VAO, 1, 0);
int vec4Size = 4;
GL.EnableVertexAttribArray(2);
GL.VertexAttribFormat(2, 4, VertexAttribType.Float, false, 0 * vec4Size * sizeof(float));
GL.VertexAttribFormat(3, 4, VertexAttribType.Float, false, 1 * vec4Size * sizeof(float));
GL.VertexAttribFormat(4, 4, VertexAttribType.Float, false, 2 * vec4Size * sizeof(float));
GL.VertexAttribFormat(5, 4, VertexAttribType.Float, false, 3 * vec4Size * sizeof(float));
GL.VertexAttribDivisor(2, 1);
GL.VertexAttribDivisor(3, 1);
GL.VertexAttribDivisor(4, 1);
GL.VertexAttribDivisor(5, 1);
GL.VertexArrayAttribBinding(VAO, 2, 1);
GL.BindVertexArray(0);
OnFrameRender snippet:
shader.Use();
shader.SetMatrix4("view", cameraViewMatrix);
shader.SetMatrix4("projection", cameraProjectionMatrix);
int mat4Size = 16;
for (int i = 0; i < simulation.poolCount; i++)
{
modelMatrix[i] = Matrix4.CreateFromAxisAngle(
this.RotationAxis, simulation.pool[i].Angle);
modelMatrix[i] = modelMatrix[i] * Matrix4.CreateTranslation(new Vector3(
simulation.pool[i].Position.X,
simulation.pool[i].Position.Y,
0f));
//modelMatrix[i] = Matrix4.Identity;
}
// Copy model matrices into the VBO
// ----------------------------------------
GL.BindBuffer(BufferTarget.ArrayBuffer, InstanceVBO);
GL.BufferData(BufferTarget.ArrayBuffer,
simulation.poolCount * mat4Size * sizeof(float),
modelMatrix, BufferUsageHint.DynamicDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
// ----------------------------------------
GL.BindVertexArray(VAO);
GL.BindVertexBuffer(1, InstanceVBO, IntPtr.Zero, mat4Size * sizeof(float));
GL.BindVertexBuffer(0, VBO[0], IntPtr.Zero, 7 * sizeof(float));
GL.DrawArraysInstanced(PrimitiveType.LineLoop, 0, LineLoopVertCount, simulation.poolCount);
GL.BindVertexBuffer(0, VBO[1], IntPtr.Zero, 7 * sizeof(float));
GL.DrawArraysInstanced(PrimitiveType.Triangles, 0, TrianglesVertCount, simulation.poolCount);
GL.BindVertexArray(0);
There is a lot wrong here.
First, you don't enable any of the attribute arrays after 2, even though your shader says you are reading attributes 3-5 too. Similarly, you don't set the attribute binding for any of the arrays after 2.
But your bigger problem is that you use glVertexAttribDivisor. That is the wrong function for what you're trying to do; it is the old API for setting the divisor.
With the separate attribute format, the divisor is part of the buffer binding, not the vertex attribute. So the divisor needs to be set with glVertexBindingDivisor, and the index it is given is the binding index you intend to bind the buffer to, which should be 1.
So presumably, your code should look like:
int vec4Size = 4;
for(int ix = 0; ix < 4; ++ix)
{
int attribIx = 2 + ix;
GL.EnableVertexAttribArray(attribIx);
GL.VertexAttribFormat(attribIx, 4, VertexAttribType.Float, false, ix * vec4Size * sizeof(float));
GL.VertexArrayAttribBinding(VAO, attribIx, 1); //All use the same buffer binding
}
GL.VertexBindingDivisor(1, 1);
In OpenTK, matrices are stored row by row, while GLSL expects them column by column, so you need to swap rows and columns (transpose) before sending them. You can do it like this:
private Matrix4[] TransposeMatrix(Matrix4[] inputModel)
{
var outputModel = new Matrix4[inputModel.Length];
for(int i = 0; i < inputModel.Length; i++)
{
outputModel[i].Row0 = inputModel[i].Column0;
outputModel[i].Row1 = inputModel[i].Column1;
outputModel[i].Row2 = inputModel[i].Column2;
outputModel[i].Row3 = inputModel[i].Column3;
}
return outputModel;
}
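If you follow this approach, here is a sketch of where the helper would be called, using the names from the render snippet above, right before the matrices are copied into InstanceVBO:
Matrix4[] uploadMatrices = TransposeMatrix(modelMatrix); // swap rows and columns before upload
GL.BindBuffer(BufferTarget.ArrayBuffer, InstanceVBO);
GL.BufferData(BufferTarget.ArrayBuffer,
    simulation.poolCount * mat4Size * sizeof(float),
    uploadMatrices, BufferUsageHint.DynamicDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);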
I want to get the current camera image from the ARCore session. I am using the Frame.CameraImage.AcquireCameraImageBytes() method to get this image. Then I load this image into a Texture2D with format TextureFormat.R8, but the texture comes out red and upside down. I know that the format ARCore uses is YUV, but I could not find a way to convert this format to RGB. How can I do this?
There are two or three questions about this issue, but no solution is given.
Code is given below:
CameraImageBytes image = Frame.CameraImage.AcquireCameraImageBytes();
int width = image.Width;
int height = image.Height;
int size = width*height;
Texture2D texture = new Texture2D(width, height, TextureFormat.R8, false, false);
byte[] m_EdgeImage = new byte[size];
System.Runtime.InteropServices.Marshal.Copy(image.Y, m_EdgeImage, 0, size);
texture.LoadRawTextureData(m_EdgeImage);
texture.Apply();
Result image:
In the code you included, you are copying the Y channel of the camera image (image.Y) into a single-channel texture (TextureFormat.R8), without doing any conversion.
YUV and RGB both have three channels, but you're using only one.
In RGB the channels usually have the same size, but in YUV they are often different: U and V can be a fraction of the size of Y, with the specific fraction depending on the format used.
Since this texture is coming from the Android camera, the specific format should be Y′UV420p, which is a planar format; see the Wikipedia page for a useful visual representation of how the channel values are grouped.
The CameraImageBytes API structure requires you to extract the channels separately, and then put them back together again programmatically.
FYI there is an easier way to get an already converted to RGB camera texture, but it can only be accessed through a shader, not C# code.
Assuming you still want to do this in C#, to gather all the channels from the YUV texture you need to treat the UV channels differently from the Y channel.
You must create a separate buffer for the UV channels. There is an example of how to do this in an issue on the Unity-Technologies/experimental-ARInterface github repo:
//We expect 2 bytes per pixel, interleaved U/V, with 2x2 subsampling
bufferSize = imageBytes.Width * imageBytes.Height / 2;
cameraImage.uv = new byte[bufferSize];
//Because U and V planes are returned separately, while remote expects interleaved U/V
//same as ARKit, we merge the buffers ourselves
unsafe
{
fixed (byte* uvPtr = cameraImage.uv)
{
byte* UV = uvPtr;
byte* U = (byte*) imageBytes.U.ToPointer();
byte* V = (byte*) imageBytes.V.ToPointer();
for (int i = 0; i < bufferSize; i+= 2)
{
*UV++ = *U;
*UV++ = *V;
U += imageBytes.UVPixelStride;
V += imageBytes.UVPixelStride;
}
}
}
This code will produce raw texture data that can be loaded into a Texture2D of format TextureFormat.RG16:
Texture2D texUVchannels = new Texture2D(imageBytes.Width / 2, imageBytes.Height / 2, TextureFormat.RG16, false, false);
texUVchannels.LoadRawTextureData(rawImageUV);
texUVchannels.Apply();
Now that you have all 3 channels stored in 2 Texture2D objects, you can convert them either through a shader or in C#.
The specific conversion formula to use for the Android camera YUV image can be found on the YUV wikipedia page:
void YUVImage::yuv2rgb(uint8_t yValue, uint8_t uValue, uint8_t vValue,
uint8_t *r, uint8_t *g, uint8_t *b) const {
int rTmp = yValue + (1.370705 * (vValue-128));
int gTmp = yValue - (0.698001 * (vValue-128)) - (0.337633 * (uValue-128));
int bTmp = yValue + (1.732446 * (uValue-128));
*r = clamp(rTmp, 0, 255);
*g = clamp(gTmp, 0, 255);
*b = clamp(bTmp, 0, 255);
}
translated into a Unity shader that would be:
float3 YUVtoRGB(float3 c)
{
float yVal = c.x;
float uVal = c.y;
float vVal = c.z;
float r = yVal + 1.370705 * (vVal - 0.5);
float g = yVal - 0.698001 * (vVal - 0.5) - (0.337633 * (uVal - 0.5));
float b = yVal + 1.732446 * (uVal - 0.5);
return float3(r, g, b);
}
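If you would rather do the conversion on the CPU instead of in a shader, the same formula in C# could look like this. This is only a sketch: it assumes the input bytes come from the Y buffer and the interleaved U/V buffer built above, and it uses Unity's Color32 and Mathf types:
using UnityEngine; // for Color32 and Mathf
// Same conversion as the shader above, done per pixel on the CPU.
static Color32 YuvToRgb(byte yValue, byte uValue, byte vValue)
{
    float y = yValue;
    float u = uValue - 128f;
    float v = vValue - 128f;
    byte r = (byte)Mathf.Clamp(y + 1.370705f * v, 0f, 255f);
    byte g = (byte)Mathf.Clamp(y - 0.698001f * v - 0.337633f * u, 0f, 255f);
    byte b = (byte)Mathf.Clamp(y + 1.732446f * u, 0f, 255f);
    return new Color32(r, g, b, 255);
}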
The texture obtained this way is of a different size compared to the background video coming from ARCore, so if you want them to match on screen, you'll need to use the UVs and other data coming from Frame.CameraImage.
So to pass the UVs to the shader:
var uvQuad = Frame.CameraImage.ImageDisplayUvs;
mat.SetVector("_UvTopLeftRight",
new Vector4(uvQuad.TopLeft.x, uvQuad.TopLeft.y, uvQuad.TopRight.x, uvQuad.TopRight.y));
mat.SetVector("_UvBottomLeftRight",
new Vector4(uvQuad.BottomLeft.x, uvQuad.BottomLeft.y, uvQuad.BottomRight.x, uvQuad.BottomRight.y));
camera.projectionMatrix = Frame.CameraImage.GetCameraProjectionMatrix(camera.nearClipPlane, camera.farClipPlane);
and to use them in the shader you'll need to lerp them as in EdgeDetectionBackground shader.
In this same shader you'll find an example of how to access the RGB camera image from a shader, without having to do any conversion, which may turn out to be easier for your use case.
There are a few requirements for that:
the shader must be written in GLSL
it can only be done in OpenGL ES 3
the GL_OES_EGL_image_external_essl3 extension needs to be supported
This is C# WPF using SharpDX 4.0.
I'm trying to update a dynamic texture on each render loop using a color buffer generated from a library. I'm seeing an issue where the resulting texture doesn't match the expected bitmap: the texture appears to be wider than expected, or the format is larger than expected.
var surfaceWidth = 200; var surfaceHeight = 200;
var pixelBytes = surfaceWidth * surfaceHeight * 4;
//Set up the color buffer and byte array to stream to the texture
_colorBuffer = new int[surfaceWidth * surfaceHeight];
_textureStreamBytes = new byte[pixelBytes]; //160000 length
//Create the texture to update
_scanTexture = new Texture2D(Device, new Texture2DDescription()
{
Format = Format.B8G8R8A8_UNorm,
ArraySize = 1,
MipLevels = 1,
Width = SurfaceWidth,
Height = SurfaceHeight,
SampleDescription = new SampleDescription(1, 0),
Usage = ResourceUsage.Dynamic,
BindFlags = BindFlags.ShaderResource,
CpuAccessFlags = CpuAccessFlags.Write,
OptionFlags = ResourceOptionFlags.None,
});
_scanResourceView= new ShaderResourceView(Device, _scanTexture);
context.PixelShader.SetShaderResource(0, _scanResourceView);
And on render I populate the color buffer and write to the texture.
protected void Render()
{
Device.ImmediateContext.ClearRenderTargetView(
RenderTargetView, new SharpDX.Mathematics.Interop.RawColor4(0.8f,0.8f,0,1));
Library.GenerateColorBuffer(ref _colorBuffer);
System.Buffer.BlockCopy(_colorBuffer, 0, depthPixels, 0, depthPixels.Length);
_parent.DrawBitmap(ref _colorBuffer);
DataBox databox = context.MapSubresource(_scanTexture, 0, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out DataStream stream);
if (!databox.IsEmpty)
stream.Write(_textureStreamBytes, 0, _textureStreamBytes.Length);
context.UnmapSubresource(_scanTexture, 0);
context.Draw(4, 0);
}
Sampler creation and setting before the above happens:
var sampler = new SamplerState(_device, new SamplerStateDescription()
{
Filter = SharpDX.Direct3D11.Filter.MinMagMipLinear,
AddressU = TextureAddressMode.Wrap,
AddressV = TextureAddressMode.Wrap,
AddressW = TextureAddressMode.Wrap,
BorderColor = SharpDX.Color.Blue,
ComparisonFunction = Comparison.Never,
MaximumAnisotropy = 1,
MipLodBias = 0,
MinimumLod = 0,
MaximumLod = 0,
});
context = _device.ImmediateContext;
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleStrip;
context.VertexShader.Set(vertexShader);
context.Rasterizer.SetViewport(new Viewport(0, 0, SurfaceWidth, SurfaceHeight, 0.0f, 1.0f));
context.PixelShader.Set(pixelShader);
context.PixelShader.SetSampler(0, sampler);
context.OutputMerger.SetTargets(depthView, _renderTargetView);
And the shader (a full-screen quad drawn as a triangle strip with no vertex buffer):
SamplerState pictureSampler;
Texture2D picture;
struct PS_IN
{
float4 pos : SV_POSITION;
float2 tex : TEXCOORD;
};
PS_IN VS(uint vI : SV_VERTEXID)
{
float2 texcoord = float2(vI & 1,vI >> 1); //you can use these for texture coordinates later
PS_IN output = (PS_IN)0;
output.pos = float4((texcoord.x - 0.5f) * 2, -(texcoord.y - 0.5f) * 2, 0, 1);
output.tex = texcoord;
return output;
}
float4 PS(PS_IN input) : SV_Target
{
return picture.Sample(pictureSampler, input.tex);
}
What I'm seeing is:
_colorBuffer length 40000 (200 width *200 height)
_textureStreamBytes length 160000 (200 * 200 * 4bytes)
Stream from the DataBox: Length = 179200, a difference of 19200 bytes / 4800 pixels.
That works out to 24 extra pixels per row (179200 / 200 rows = 896 bytes = 224 pixels per row); in other words, the texture is 24 pixels wider than expected, yet debugging shows the width/height as 200.
Image showing the issue. Left is rendered view, right is bitmap
Does anyone know what I'm doing wrong here? Or things that should/could be done differently?
Thank you.
P.S. I've got this working correctly in OpenGL by using a similar process but need to get it working for directx:
gl.TexSubImage2D(OpenGL.GL_TEXTURE_2D, 0, 0, 0, (int)width, (int)height, OpenGL.GL_RGBA, OpenGL.GL_UNSIGNED_BYTE, colorBuffer);
From experimenting, it appears that multiples of 32 are needed for both width and height. For example, 100 × 128 will still cause an issue even though 128 is a multiple of 32. Instead I'm using:
var newHeight = (int)(initialHeight / 32) * 32;
var newWidth = (int)(initialWidth / 32) * 32;
I'm not sure whether the root issue is my own mistake, a SharpDX issue, or a DirectX issue. The other way I can see to solve this is to add padding to the pixel array to account for the difference in length at the end of each row.
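For reference, here is a sketch of that row-padding idea, using the names from the snippets above. The mapped DataBox reports the real row size in RowPitch (896 bytes here, which matches the observed 179200-byte stream over 200 rows), so you can copy the source one row at a time and let the stream skip the padding at the end of each destination row:
DataBox databox = context.MapSubresource(_scanTexture, 0, MapMode.WriteDiscard,
    SharpDX.Direct3D11.MapFlags.None, out DataStream stream);
int rowBytes = surfaceWidth * 4; // tightly packed source row (B8G8R8A8)
for (int y = 0; y < surfaceHeight; y++)
{
    stream.Position = y * databox.RowPitch; // start of the padded destination row
    stream.Write(_textureStreamBytes, y * rowBytes, rowBytes);
}
context.UnmapSubresource(_scanTexture, 0);
This keeps the texture at its original 200 x 200 size instead of rounding it to a multiple of 32.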
I'm trying to draw a large graph (~3,000,000 vertices, ~5,000,000 edges) using OpenTK.
However I can't seem to get it working.
I create a VBO containing the positions of all the vertices like so:
// build the coords list
float[] coords = new float[vertices.Length * 3];
Dictionary<int, int> vertexIndexMap = new Dictionary<int, int>();
int count = 0, i = 0;
foreach (Vertex v in vertices) {
vertexIndexMap[v.Id] = i++;
coords[count++] = v.x;
coords[count++] = v.y;
coords[count++] = v.z;
}
// build the index list
int[] indices = new int[edges.Length * 2];
count = 0;
foreach (Edge e in edges) {
indices[count++] = vertexIndexMap[e.First.Id];
indices[count++] = vertexIndexMap[e.Second.Id];
}
// bind the buffers
int[] bufferPtrs = new int[2];
GL.GenBuffers(2, bufferPtrs);
GL.EnableClientState(ArrayCap.VertexArray);
GL.EnableClientState(ArrayCap.IndexArray);
// buffer the vertex data
GL.BindBuffer(BufferTarget.ArrayBuffer, bufferPtrs[0]);
GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(coords.Length * sizeof(float)), coords, BufferUsageHint.StaticDraw);
GL.VertexPointer(3, VertexPointerType.Float, 0, IntPtr.Zero); // tell opengl we have a closely packed vertex array
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
// buffer the index data
GL.BindBuffer(BufferTarget.ElementArrayBuffer, bufferPtrs[1]);
GL.BufferData(BufferTarget.ElementArrayBuffer, (IntPtr)(indices.Length * sizeof(int)), indices, BufferUsageHint.StaticDraw);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, 0);
And I attempt to draw the buffers like so:
// draw the vertices
GL.BindBuffer(BufferTarget.ArrayBuffer, bufferPtrs[0]);
GL.Color3(Color.Blue);
GL.DrawArrays(PrimitiveType.Points, 0, coords.Length);
// draw the edges
GL.BindBuffer(BufferTarget.ElementArrayBuffer, bufferPtrs[1]);
GL.Color3(Color.Red);
GL.DrawElements(PrimitiveType.Lines, indices.Length, DrawElementsType.UnsignedInt, bufferPtrs[1]);
When I run this, all of the vertices draw as expected in all of their correct locations,
However about half of the edges are drawn joining a vertex to the origin.
To sanity check I tried drawing the edges with a Begin/End block, and they all drew correctly.
Could someone please point out how I am misusing the VBOs?
The last argument to your DrawElements() call is wrong:
GL.DrawElements(PrimitiveType.Lines, indices.Length, DrawElementsType.UnsignedInt,
bufferPtrs[1]);
Without an element array buffer bound, the last argument to DrawElements() is a pointer to the indices. If an element array buffer is bound (which is the case in your code), the last argument is an offset into that buffer. To use the whole buffer, the offset is 0:
GL.DrawElements(PrimitiveType.Lines, indices.Length, DrawElementsType.UnsignedInt, 0);
You probably also want to remove this call:
GL.EnableClientState(ArrayCap.IndexArray);
It is not for enabling vertex indices, but color indices: it belongs to color index mode, which is a very obsolete feature.
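For illustration only, not part of the fix: because that last argument is a byte offset into the bound element array buffer, you could also draw just a sub-range of the edges, for example skipping the first 1000 edges (2000 indices):
GL.BindBuffer(BufferTarget.ElementArrayBuffer, bufferPtrs[1]);
GL.DrawElements(PrimitiveType.Lines, indices.Length - 2000,
    DrawElementsType.UnsignedInt, 2000 * sizeof(int)); // offset in bytes, count in indices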
I have a very, very strange problem in my project. I tried to send an index number with my vertices and use this number in the HLSL shader.
But when I set this number to a value like 1, the shader receives a wide spread of values from it: negative values, values between 0 and 1, and values above 1. The same happens even when I pass in nonsense numbers like 10000.
(I use C# with SlimDX, and this is done with pixel shader 2.0; I tried 3.0 as well.)
int c = 0;
int testValue = 10000;
for (int i = 0; i < tris.Length; i++)
{
vertices[c].Position = tris[i].p0;
vertices[c].Normal = tris[i].normal;
vertices[c].Index = testValue;
vertices[c++].Color = Color.LightBlue.ToArgb();
vertices[c].Position = tris[i].p1;
vertices[c].Normal = tris[i].normal;
vertices[c].Index = testValue;
vertices[c++].Color = Color.LightBlue.ToArgb();
vertices[c].Position = tris[i].p2;
vertices[c].Normal = tris[i].normal;
vertices[c].Index = testValue;
vertices[c++].Color = Color.LightBlue.ToArgb();
}
This is how I create my vertices. Everything works as it should, except the Index value: I get the colors right, the normals right, and the positions right.
Here is the HLSL code:
VertexToPixel IndexedRenderedVS( float4 inPos : POSITION, float4 inColor : COLOR0, float3 inNormal : NORMAL, int inIndex : TEXCOORD0)
{
VertexToPixel Output = (VertexToPixel)0;
float4x4 xWorldViewProjection = mul(xWorld, xViewProjection);
Output.Position = mul(inPos, xWorldViewProjection);
if (inIndex < 0)
Output.Color = float4(1, 0, 0, 1);
if (inIndex > 0 && inIndex < 1)
Output.Color = float4(0, 1, 0, 1);
if (inIndex > 1)
Output.Color = float4(0, 0, 1, 1);
//Output.Color = inColor;
return Output;
}
PixelToFrame IndexedRenderedPS(VertexToPixel PSIN)
{
PixelToFrame Output = (PixelToFrame)0;
Output.Color = PSIN.Color;
return Output;
}
And finally, my VertexFormat struct:
internal Vector3 Position;
internal int Color;
internal Vector3 Normal;
internal int Index;
internal static VertexElement[] VertexElements = new VertexElement[]
{
new VertexElement(0, 0, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.Position, 0),
new VertexElement(0, sizeof(float) * 3, DeclarationType.Color, DeclarationMethod.Default, DeclarationUsage.Color, 0),
new VertexElement(0, sizeof(float) * 7, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.Normal, 0),
new VertexElement(0, sizeof(float) * 10, DeclarationType.Float1, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 0),
VertexElement.VertexDeclarationEnd
};
internal static VertexFormat Format
{
get { return VertexFormat.Position | VertexFormat.Diffuse | VertexFormat.Normal | VertexFormat.Texture1; }
}
With this code and an insane index value (10000), I get the same picture no matter what value I pass in: 0, 1, or even negative numbers. It just doesn't care what I give it.
I get this picture:
Does anybody have any idea where I made a mistake? I simply can't find where I misplaced a value. I've tried a huge number of vertex declarations and changed everything inside the shader, and now I've officially run out of ideas.
Any help is appreciated. Thank you :)
In DirectX, texture coordinates are always floats, usually a float2, but sometimes a float3 or float4 (you can specify a float1, but if you do you'll actually get a float2 in the assembly and the runtime will throw away the second channel). You're typing it on the CPU side as an int, and then in the vertex description as a float1, and finally in the shader as an int. I would recommend typing all of these as float2 to start.
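A sketch of what that could look like on the CPU side, following the struct shown in the question. This is hypothetical code, not a confirmed fix: the index is stored as a Vector2, declared as Float2, and would be read as float2 inIndex : TEXCOORD0 in the shader. The offsets here are computed from the field sizes in this sketch and may need adjusting to match your real layout:
using SlimDX;
using SlimDX.Direct3D9;
internal struct IndexedVertex
{
    internal Vector3 Position;   // offset 0,  12 bytes
    internal int Color;          // offset 12,  4 bytes (D3DCOLOR)
    internal Vector3 Normal;     // offset 16, 12 bytes
    internal Vector2 Index;      // offset 28,  8 bytes; only .x carries the index
    internal static readonly VertexElement[] VertexElements = new VertexElement[]
    {
        new VertexElement(0, 0,  DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.Position, 0),
        new VertexElement(0, 12, DeclarationType.Color,  DeclarationMethod.Default, DeclarationUsage.Color, 0),
        new VertexElement(0, 16, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.Normal, 0),
        new VertexElement(0, 28, DeclarationType.Float2, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 0),
        VertexElement.VertexDeclarationEnd
    };
}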