SlimDX Direct3D 11 Indexing Problems - c#

I'm trying to draw an indexed square using SlimDX and Direct3D11. I've managed to draw a square without indices, but when I swap to my indexed version I just get a blank screen.
My input layout is set to take only position data (I'm essentially extending the third tutorial on the SlimDX website) and to draw triangle lists.
My render loop code is as follows. (I am using the triangle.fx pixel and vertex shaders from the tutorial; they take vertex positions in screen coordinates and paint them yellow. D3D is shorthand for SlimDX.Direct3D11.)
//clear the render target
context.ClearRenderTargetView(renderTarget, new Color4(0.5f, 0.5f, 1.0f));
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(mesh.VertexBuffer,12, 0));
context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UNorm, 0);
context.DrawIndexed(mesh.indices, 0, 0);
swapChain.Present(0, PresentFlags.None);
"mesh" is a struct that holds a Vertex buffer, Index buffer and vertex count. The data is filled here:
Vertex[] vertexes = new Vertex[4];
vertexes[0].Position = new Vector3(0, 0, 0.5f);
vertexes[1].Position = new Vector3(0, 0.5f, 0.5f);
vertexes[2].Position = new Vector3(0.5f, 0, 0.5f);
vertexes[3].Position = new Vector3(0.5f, 0.5f, 0.5f);
UInt16[] indexes = { 0, 1, 2, 1, 3, 2 };
DataStream vertices = new DataStream(12 * 4, true, true);
foreach (Vertex vertex in vertexes)
{
vertices.Write(vertex.Position);
}
vertices.Position = 0;
DataStream indices = new DataStream(sizeof(int) * 6, true, true);
foreach (UInt16 index in indexes)
{
indices.Write(index);
}
indices.Position = 0;
mesh = new Mesh();
D3D.Buffer vertexBuffer = new D3D.Buffer(device, vertices, 12 * 4, ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
mesh.VertexBuffer = vertexBuffer;
mesh.IndexBuffer = new D3D.Buffer(device, indices, 2 * 6, ResourceUsage.Default, BindFlags.IndexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
mesh.vertices = vertexes.GetLength(0);
mesh.indices = indexes.Length;
All of this is nearly identical to my unindexed square method (with the addition of index buffers and indices, and the removal of two duplicate vertices that aren't needed with indexing), but while the unindexed method draws a square, the indexed method doesn't.
My current theory is that there is either something wrong with this line:
mesh.IndexBuffer = new D3D.Buffer(device, indices, 2 * 6, ResourceUsage.Default, BindFlags.IndexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
Or these lines:
context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UNorm, 0);
context.DrawIndexed(mesh.indices, 0, 0);
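For what it's worth, the index format in that SetIndexBuffer call is a likely culprit (an editorial observation, not from the original thread): R16_UNorm is a normalized format meant for vertex/texture data, while a D3D11 index buffer must be bound with an unsigned-integer format, so the fix would be:
context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UInt, 0); // R16_UInt (or R32_UInt for int indices), never a UNorm format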

Why don't you just use a vertex and index buffer for this simple example?
Like this (DirectX 9):
VertexBuffer vb;
IndexBuffer ib;
vertices = new PositionColored[WIDTH * HEIGHT];
//vertex creation
vb = new VertexBuffer(device, HEIGHT * WIDTH * PositionColored.SizeInBytes, Usage.WriteOnly, PositionColored.Format, Pool.Default);
DataStream stream = vb.Lock(0, 0, LockFlags.None);
stream.WriteRange(vertices);
vb.Unlock();
indices = new short[(WIDTH - 1) * (HEIGHT - 1) * 6];
//indices creation
ib = new IndexBuffer(device, sizeof(short) * (WIDTH - 1) * (HEIGHT - 1) * 6, Usage.WriteOnly, Pool.Default, true); // 16-bit indices to match the short[] array
stream = ib.Lock(0, 0, LockFlags.None);
stream.WriteRange(indices);
ib.Unlock();
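(The bodies of "//vertex creation" and "//indices creation" are elided above. For a WIDTH x HEIGHT vertex grid, the index loop is typically something like the following sketch, which assumes a row-major vertex layout; the variable names are mine:)
int k = 0;
for (int y = 0; y < HEIGHT - 1; y++)
{
    for (int x = 0; x < WIDTH - 1; x++)
    {
        // Two triangles per grid cell
        short topLeft = (short)(y * WIDTH + x);
        short topRight = (short)(topLeft + 1);
        short bottomLeft = (short)(topLeft + WIDTH);
        short bottomRight = (short)(bottomLeft + 1);
        indices[k++] = topLeft;
        indices[k++] = bottomLeft;
        indices[k++] = topRight;
        indices[k++] = topRight;
        indices[k++] = bottomLeft;
        indices[k++] = bottomRight;
    }
}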
//Drawing
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.DarkSlateBlue, 1.0f, 0);
device.BeginScene();
device.VertexFormat = PositionColored.Format;
device.SetStreamSource(0, vb, 0, PositionColored.SizeInBytes);
device.Indices = ib;
device.SetTransform(TransformState.World, Matrix.Translation(-HEIGHT / 2, -WIDTH / 2, 0) * Matrix.RotationZ(angle));
device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, WIDTH * HEIGHT, 0, indices.Length / 3);
device.EndScene();
device.Present();
I use the mesh in another way (DirectX 9 code again):
private void CreateMesh()
{
meshTerrain = new Mesh(device, (WIDTH - 1) * (HEIGHT - 1) * 2, WIDTH * HEIGHT, MeshFlags.Managed, PositionColored.Format);
DataStream stream = meshTerrain.VertexBuffer.Lock(0, 0, LockFlags.None);
stream.WriteRange(vertices);
meshTerrain.VertexBuffer.Unlock();
stream.Close();
stream = meshTerrain.IndexBuffer.Lock(0, 0, LockFlags.None);
stream.WriteRange(indices);
meshTerrain.IndexBuffer.Unlock();
stream.Close();
meshTerrain.GenerateAdjacency(0.5f);
meshTerrain.OptimizeInPlace(MeshOptimizeFlags.VertexCache);
meshTerrain = meshTerrain.Clone(device, MeshFlags.Dynamic, PositionNormalColored.Format);
meshTerrain.ComputeNormals();
}
//Drawing
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.DarkSlateBlue, 1.0f, 0);
device.BeginScene();
device.VertexFormat = PositionNormalColored.Format; // the mesh was cloned to PositionNormalColored above
device.SetTransform(TransformState.World, Matrix.Translation(-HEIGHT / 2, -WIDTH / 2, 0) * Matrix.RotationZ(angle));
int numSubSets = meshTerrain.GetAttributeTable().Length;
for (int i = 0; i < numSubSets; i++)
{
meshTerrain.DrawSubset(i);
}
device.EndScene();
device.Present();

Related

Meshes created from code don't have a position (Unity)

So, I tried to create a grid so that I can instantiate objects on it. I check for the position of the hit object (one of the squares I created) and then set the instantiated object to that position. The problem is that the squares I created in code don't have a position; they are all set to (0, 0, 0).
{
GameObject tileObject = new GameObject(string.Format("{0}, {1}", x, y));
tileObject.transform.parent = transform;
Mesh mesh = new Mesh();
tileObject.AddComponent<MeshFilter>().mesh = mesh;
tileObject.AddComponent<MeshRenderer>().material = tileMaterial;
Vector3[] vertices = new Vector3[4];
vertices[0] = new Vector3(x * tileSize, 0, y * tileSize);
vertices[1] = new Vector3(x * tileSize, 0, (y + 1) * tileSize);
vertices[2] = new Vector3((x + 1) * tileSize, 0, y * tileSize);
vertices[3] = new Vector3((x + 1) * tileSize, 0, (y + 1) * tileSize);
int[] tris = new int[] { 0, 1, 2, 1, 3, 2 };
mesh.vertices = vertices;
mesh.triangles = tris;
mesh.RecalculateNormals();
tileObject.layer = LayerMask.NameToLayer("Tile");
tileObject.AddComponent<BoxCollider>();
//var xPos = Mathf.Round(x);
//var yPos = Mathf.Round(y);
//tileObject.gameObject.transform.position = new Vector3(xPos , 0f, yPos);
return tileObject;
}
As said, your issue is that you leave all tiles at position (0, 0, 0) and only set their vertices to the desired world-space positions.
You would rather want to keep your vertices local, e.g.
// I would use the offset of -0.5f so the mesh is centered at the transform pivot.
// Also no need to recreate the arrays every time; you can simply reference the same ones.
private readonly Vector3[] vertices = new Vector3[4]
{
new Vector3(-0.5f, 0, -0.5f),
new Vector3(-0.5f, 0, 0.5f),
new Vector3(0.5f, 0, -0.5f),
new Vector3(0.5f, 0, 0.5f)
};
private readonly int[] tris = new int[] { 0, 1, 2, 1, 3, 2 };
and then in your method do
GameObject tileObject = new GameObject($"{x},{y}");
tileObject.transform.parent = transform;
tileObject.transform.localScale = new Vector3(tileSize, 1, tileSize);
tileObject.transform.localPosition = new Vector3(x * tileSize, 0, y * tileSize);
The latter depends of course on your needs. Actually, I would prefer to have the tiles also centered around the grid object, so something like e.g.
// The "+0.5f" centers the tile itself correctly
// The "-gridWidth/2f" makes the entire grid centered around the parent
tileObject.transform.localPosition = new Vector3((x + 0.5f - gridWidth / 2f) * tileSize, 0, (y + 0.5f - gridHeight / 2f) * tileSize);
In order to later find out which tile you are standing on (e.g. via raycasts, collisions, etc.), I would rather use a dedicated component and simply tell it its coordinates, e.g.
// Note that Tile is a built-in type so you would want to avoid confusion
public class MyTile : MonoBehaviour
{
public Vector2Int GridPosition;
}
and then while generating your grid you would simply add
var tile = tileObject.AddComponent<MyTile>();
tile.GridPosition = new Vector2Int(x,y);
You can still also access its transform.position to get the actual world-space center of the tile.
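Putting that together, looking a tile up later could be a raycast like the following sketch (the camera and input handling here are my assumptions, not part of the original answer):
// Raycast from the mouse through the camera and read back the tile's grid coordinates
Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
if (Physics.Raycast(ray, out RaycastHit hit, 100f, LayerMask.GetMask("Tile")))
{
    MyTile tile = hit.collider.GetComponent<MyTile>();
    Debug.Log($"Hit tile at grid position {tile.GridPosition}");
}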

OpenGL - Geometry shader shadow mapping pass performing terribly

I'm calculating shadows for a number of point lights using Variance Shadow Mapping. All 6 faces of the cubemap are rendered in a single pass with a geometry shader, this repeats for each light source, and the whole lot is stored in a cubemap array. This all runs fine, 16 lights at 60fps no problem.
Chasing further optimisation, I tried to move the entire process into a single geometry shader pass, only to hit my hardware's 113-vertex output limit. Out of curiosity I decided to render only 4 lights (72 emitted vertices), and to my surprise the frame rate dropped to 24fps.
So why is it that 16 lights with 16 render passes perform significantly better than 4 lights in a single pass?
The code is essentially identical.
#version 400 core
layout(triangles) in;
layout (triangle_strip, max_vertices=18) out;
uniform int lightID;
out vec4 frag_position;
uniform mat4 projectionMatrix;
uniform mat4 shadowTransforms[6];
void main()
{
for(int face = 0; face < 6; face++)
{
gl_Layer = face + (lightID * 6);
for(int i=0; i<3; i++)
{
frag_position = shadowTransforms[face] * gl_in[i].gl_Position;
gl_Position = projectionMatrix * shadowTransforms[face] * gl_in[i].gl_Position;
EmitVertex();
}
EndPrimitive();
}
}
versus
#version 400 core
layout(triangles) in;
layout (triangle_strip, max_vertices=72) out;
out vec4 frag_position;
uniform mat4 projectionMatrix;
uniform mat4 shadowTransforms[24];
void main()
{
for (int lightSource = 0; lightSource < 4; lightSource++)
{
for(int face = 0; face < 6; face++)
{
gl_Layer = face + (lightSource * 6);
for(int i=0; i<3; i++)
{
frag_position = shadowTransforms[gl_Layer] * gl_in[i].gl_Position;
gl_Position = projectionMatrix * shadowTransforms[gl_Layer] * gl_in[i].gl_Position;
EmitVertex();
}
EndPrimitive();
}
}
}
And
public void ShadowMapsPass(Shader shader)
{
// Setup
GL.UseProgram(shader.ID);
GL.Viewport(0, 0, CubeMapArray.size, CubeMapArray.size);
// Clear the cubemap array data from the previous frame
GL.BindFramebuffer(FramebufferTarget.Framebuffer, shadowMapArray.FBO_handle);
GL.ClearColor(Color.White);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
for (int j = 0; j < lights.Count; j++)
{
// Create the light's view matrices
List<Matrix4> shadowTransforms = new List<Matrix4>();
shadowTransforms.Add(Matrix4.LookAt(lights[j].position, lights[j].position + new Vector3(1, 0, 0), new Vector3(0, -1, 0)));
shadowTransforms.Add(Matrix4.LookAt(lights[j].position, lights[j].position + new Vector3(-1, 0, 0), new Vector3(0, -1, 0)));
shadowTransforms.Add(Matrix4.LookAt(lights[j].position, lights[j].position + new Vector3(0, 1, 0), new Vector3(0, 0, 1)));
shadowTransforms.Add(Matrix4.LookAt(lights[j].position, lights[j].position + new Vector3(0, -1, 0), new Vector3(0, 0, -1)));
shadowTransforms.Add(Matrix4.LookAt(lights[j].position, lights[j].position + new Vector3(0, 0, 1), new Vector3(0, -1, 0)));
shadowTransforms.Add(Matrix4.LookAt(lights[j].position, lights[j].position + new Vector3(0, 0, -1), new Vector3(0, -1, 0)));
// Send uniforms to the shader
for (int i = 0; i < 6; i++)
{
Matrix4 shadowTransform = shadowTransforms[i];
GL.UniformMatrix4(shader.getUniformID("shadowTransforms[" + i + "]"), false, ref shadowTransform);
}
GL.Uniform1(shader.getUniformID("lightID"), j);
DrawScene(shader, false);
}
}
versus
public void ShadowMapsPass(Shader shader)
{
// Setup
GL.UseProgram(shader.ID);
GL.Viewport(0, 0, CubeMapArray.size, CubeMapArray.size);
// Clear the cubemap array data from the previous frame
GL.BindFramebuffer(FramebufferTarget.Framebuffer, shadowMapArray.FBO_handle);
GL.ClearColor(Color.White);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
// Create the light's view matrices
List<Matrix4> shadowTransforms = new List<Matrix4>();
for (int j = 0; j < lights.Count; j++)
{
shadowTransforms.Add(Matrix4.LookAt(lights[j].position, lights[j].position + new Vector3(1, 0, 0), new Vector3(0, -1, 0)));
shadowTransforms.Add(Matrix4.LookAt(lights[j].position, lights[j].position + new Vector3(-1, 0, 0), new Vector3(0, -1, 0)));
shadowTransforms.Add(Matrix4.LookAt(lights[j].position, lights[j].position + new Vector3(0, 1, 0), new Vector3(0, 0, 1)));
shadowTransforms.Add(Matrix4.LookAt(lights[j].position, lights[j].position + new Vector3(0, -1, 0), new Vector3(0, 0, -1)));
shadowTransforms.Add(Matrix4.LookAt(lights[j].position, lights[j].position + new Vector3(0, 0, 1), new Vector3(0, -1, 0)));
shadowTransforms.Add(Matrix4.LookAt(lights[j].position, lights[j].position + new Vector3(0, 0, -1), new Vector3(0, -1, 0)));
}
// Send uniforms to the shader
for (int i = 0; i < shadowTransforms.Count; i++)
{
Matrix4 shadowTransform = shadowTransforms[i];
GL.UniformMatrix4(shader.getUniformID("shadowTransforms[" + i + "]"), false, ref shadowTransform);
}
DrawScene(shader, false);
}
I'd guess there are fewer opportunities for parallel execution in the second form. The first version of the geometry shader emits 18 vertices and must run 4 times, but those 4 invocations can execute in parallel; the second version emits all 72 vertices one after the other.
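A middle ground worth trying (an editorial suggestion, not from the original answer) is geometry shader instancing, which is core in GL 4.0: each invocation stays small, a single pass still covers all four lights, and the invocations are free to run in parallel:
#version 400 core
layout(triangles, invocations = 4) in; // one invocation per light
layout(triangle_strip, max_vertices = 18) out; // back to 18 vertices per invocation
out vec4 frag_position;
uniform mat4 projectionMatrix;
uniform mat4 shadowTransforms[24];
void main()
{
    for (int face = 0; face < 6; face++)
    {
        // gl_InvocationID tells this invocation which light it is shading
        gl_Layer = face + (gl_InvocationID * 6);
        for (int i = 0; i < 3; i++)
        {
            frag_position = shadowTransforms[gl_Layer] * gl_in[i].gl_Position;
            gl_Position = projectionMatrix * shadowTransforms[gl_Layer] * gl_in[i].gl_Position;
            EmitVertex();
        }
        EndPrimitive();
    }
}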

Sprite Sheet Animation in OpenTK

I have been trying to make my first-ever game engine (in OpenTK) and I'm stuck. I am trying to incorporate animations, and I want to use sprite sheets because they would greatly decrease file size.
I know that I need to use a loop to draw the sprites in sequence, but I don't have a clue what I should do to make it work with sprite sheets.
Here is my current drawing code:
// The base function for all drawing done in the application
public static void Draw (Texture2D texture, Vector2 position, Vector2 scale, Color color, Vector2 origin, RectangleF? SourceRec = null)
{
Vector2[] vertices = new Vector2[4]
{
new Vector2(0, 0),
new Vector2(1, 0),
new Vector2(1, 1),
new Vector2(0, 1),
};
GL.BindTexture(TextureTarget.Texture2D, texture.ID);
//Beginning to draw on the screen
GL.Begin(PrimitiveType.Quads);
GL.Color3(color);
for (int i = 0; i < 4; i++)
{
GL.TexCoord2((SourceRec.Value.Left + vertices[i].X * SourceRec.Value.Width) / texture.Width, (SourceRec.Value.Top + vertices[i].Y * SourceRec.Value.Height) / texture.Height); // Top (not Left) for the Y coordinate
vertices[i].X *= texture.Width;
vertices[i].Y *= texture.Height;
vertices[i] -= origin;
vertices[i] *= scale;
vertices[i] += position;
GL.Vertex2(vertices[i]);
}
GL.End();
}
public static void Begin(int screenWidth, int screenHeight)
{
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(-screenWidth / 2, screenWidth / 2, screenHeight / 2, -screenHeight / 2, 0f, 1f);
}
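(With this Draw signature, selecting an animation frame is just a matter of computing SourceRec per frame. A sketch, assuming a horizontal strip of frameCount equally sized frames; the variable names are mine:)
// Pick frame frameIndex out of a horizontal strip of frameCount equal frames
int frameWidth = texture.Width / frameCount;
RectangleF sourceRec = new RectangleF(frameIndex * frameWidth, 0, frameWidth, texture.Height);
Draw(texture, position, scale, Color.White, origin, sourceRec);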
Thanks in advance!!! :)
Firstly, I beg you, abandon the fixed-function pipeline. It blinds you and restricts your understanding and creativity. Move to core 3.1+; its structural perfection is matched only by its potentiality, and the collaborative forethought and innovation will truly move you.
Now, you'll want to store your sprite sheet images as a Texture2DArray. It took me some time to figure out how to load these correctly:
public SpriteSheet(string filename, int spriteWidth, int spriteHeight)
{
// Assign ID and get name
this.textureID = GL.GenTexture();
this.spriteSheetName = Path.GetFileNameWithoutExtension(filename);
// Bind the Texture Array and set appropriate parameters
GL.BindTexture(TextureTarget.Texture2DArray, textureID);
GL.TexParameter(TextureTarget.Texture2DArray, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
GL.TexParameter(TextureTarget.Texture2DArray, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);
GL.TexParameter(TextureTarget.Texture2DArray, TextureParameterName.TextureWrapS, (int)TextureWrapMode.ClampToEdge);
GL.TexParameter(TextureTarget.Texture2DArray, TextureParameterName.TextureWrapT, (int)TextureWrapMode.ClampToEdge);
// Load the image file
Bitmap image = new Bitmap(@"Tiles/" + filename);
BitmapData data = image.LockBits(new System.Drawing.Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
// Determine columns and rows
int spriteSheetwidth = image.Width;
int spriteSheetheight = image.Height;
int columns = spriteSheetwidth / spriteWidth;
int rows = spriteSheetheight / spriteHeight;
// Allocate storage
GL.TexStorage3D(TextureTarget3d.Texture2DArray, 1, SizedInternalFormat.Rgba8, spriteWidth, spriteHeight, rows * columns);
// Split the loaded image into individual Texture2D slices
GL.PixelStore(PixelStoreParameter.UnpackRowLength, spriteSheetwidth);
for (int i = 0; i < columns * rows; i++)
{
GL.TexSubImage3D(TextureTarget.Texture2DArray,
0, 0, 0, i, spriteWidth, spriteHeight, 1,
OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte,
data.Scan0 + (spriteWidth * 4 * (i % columns)) + (spriteSheetwidth * 4 * spriteHeight * (i / columns))); // 4 bytes per Bgra texel
}
GL.PixelStore(PixelStoreParameter.UnpackRowLength, 0); // restore the default unpack state
image.UnlockBits(data);
}
Then you can simply draw a quad, and the Z texture coordinate is the Texture2D index in the Texture2DArray. Note that frag_texcoord is of type vec3.
#version 330 core
in vec3 frag_texcoord;
out vec4 outColor;
uniform sampler2DArray tex;
void main()
{
outColor = texture(tex, vec3(frag_texcoord));
}
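For completeness, the matching vertex shader only needs to pass the index through. A minimal sketch (the attribute names and the projection uniform are my assumptions, not from the original answer):
#version 330 core
layout(location = 0) in vec2 in_position;
layout(location = 1) in vec3 in_texcoord; // z selects the slice in the Texture2DArray
out vec3 frag_texcoord;
uniform mat4 projection;
void main()
{
    frag_texcoord = in_texcoord;
    gl_Position = projection * vec4(in_position, 0.0, 1.0);
}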

Rendering triangles with OpenTK from VBO

I cannot render triangles for the life of me with a VBO in OpenTK. I load my data into the VBO in the glControl_Load() event, but I get a background screen with no triangles when running. The data comes from a mesh: m.OpenGLArrays(out data, out indices) outputs a list of floats and a list of ints. The float list holds the vertices T1v1, T1v2, T1v3, T2v1, T2v2, T2v3, ..., all three vertices of each triangle back to back.
However, while this code gives me a blank screen, everything renders fine when I use the commented-out immediate-mode rendering code instead. What am I doing wrong?
private void glControl_Load(object sender, EventArgs e)
{
loaded = true;
glControl.MouseMove += new MouseEventHandler(glControl_MouseMove);
glControl.MouseWheel += new MouseEventHandler(glControl_MouseWheel);
GL.ClearColor(Color.DarkSlateGray);
GL.Color3(1f, 1f, 1f);
m.OpenGLArrays(out data, out indices);
this.indicesSize = (uint)indices.Length;
GL.GenBuffers(1, out VBOid[0]);
GL.GenBuffers(1, out VBOid[1]);
SetupViewport();
}
private void SetupViewport()
{
if (this.WindowState == FormWindowState.Minimized) return;
glControl.Width = this.Width - 32;
glControl.Height = this.Height - 80;
Frame_label.Location = new System.Drawing.Point(glControl.Width / 2, glControl.Height + 25);
GL.MatrixMode(MatrixMode.Projection);
//GL.LoadIdentity();
GL.Ortho(0, glControl.Width, 0, glControl.Height, -1, 1); // Bottom-left corner pixel has coordinate (0, 0)
GL.Viewport(0, 0, glControl.Width, glControl.Height); // Use all of the glControl painting area
GL.Enable(EnableCap.DepthTest);
GL.BindBuffer(BufferTarget.ArrayBuffer, VBOid[0]);
GL.BufferData(BufferTarget.ArrayBuffer, (IntPtr)(data.Length * sizeof(float)), data, BufferUsageHint.StaticDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
float aspect_ratio = this.Width / (float)this.Height;
projection = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspect_ratio, 1, 1024);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadMatrix(ref projection);
}
private void glControl_Paint(object sender, PaintEventArgs e)
{
if (loaded)
{
GL.Clear(ClearBufferMask.ColorBufferBit |
ClearBufferMask.DepthBufferBit |
ClearBufferMask.StencilBufferBit);
modelview = Matrix4.LookAt(0f, 0f, -200f + zoomFactor, 0, 0, 0, 0.0f, 1.0f, 0.0f);
var aspect_ratio = Width / (float)Height;
projection = Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, aspect_ratio, 1, 512);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadMatrix(ref projection);
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadMatrix(ref modelview);
GL.Rotate(angleY, 1.0f, 0, 0);
GL.Rotate(angleX, 0, 1.0f, 0);
GL.EnableClientState(ArrayCap.VertexArray);
GL.BindBuffer(BufferTarget.ArrayBuffer, VBOid[0]);
GL.Color3(Color.Yellow);
GL.VertexPointer(3, VertexPointerType.Float, Vector3.SizeInBytes, new IntPtr(0));
GL.DrawArrays(PrimitiveType.Triangles, 0, data.Length);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
GL.DisableClientState(ArrayCap.VertexArray);
//GL.Color3(Color.Yellow);
//GL.PolygonMode(MaterialFace.Front, PolygonMode.Fill);
//GL.Begin(PrimitiveType.Triangles);
//for (int i = 0; i < this.md.mesh.Count; i++)
//{
// GL.Normal3(this.md.mesh[i].normal);
// GL.Vertex3(this.md.mesh[i].vertices[0]);
// GL.Vertex3(this.md.mesh[i].vertices[1]);
// GL.Vertex3(this.md.mesh[i].vertices[2]);
//}
//GL.End();
//GL.EndList();
glControl.SwapBuffers();
Frame_label.Text = "Frame: " + frameNum++;
}
}
If something doesn't seem right, then it probably isn't. I seriously questioned my understanding of OpenGL and spent hours looking at this. However, it was just a simple error: I forgot to increment a count variable in the loop that transfers the mesh from one object to another, so every triangle had identical vertices! Always expect the unexpected when it comes to debugging!

SharpDX DataStream-based buffer initialization fails

I am trying to create a very basic mesh renderer using D3D11 to use in my final project for school. Although I followed basic online tutorials like the RasterTek site and Frank De Luna's book to the letter, and used the simplest passthrough shader imaginable, I couldn't get my triangles to show up on the screen. Finally I found out about VS 2013's graphics debugging ability, and I was able to see that my vertex and index buffers were filled with garbage data. I've hosted the solution here if you want to run the code, but can someone familiar with D3D and/or its SharpDX C# wrapper tell me what I'm doing wrong in the following code?
This is my geometry data. The Vertex struct has Vector4 position and color fields, and Index is an alias for ushort.
var vertices = new[]
{
new Vertex(new Vector4(-1, 1, 0, 1), Color.Red),
new Vertex(new Vector4(1, 1, 0, 1), Color.Green),
new Vertex(new Vector4(1, -1, 0, 1), Color.Blue),
new Vertex(new Vector4(-1, -1, 0, 1), Color.White)
};
var indices = new Index[]
{
0, 2, 1,
0, 3, 2
};
And here is the code that fails to initialize my vertex and index buffers with the above data.
var vStream = new DataStream(sizeInBytes: vertices.Length * sizeof(Vertex), canRead: false, canWrite: true);
var iStream = new DataStream(sizeInBytes: indices.Length * sizeof(Index), canRead: false, canWrite: true);
{
vStream.WriteRange(vertices);
iStream.WriteRange(indices);
vBuffer = new Buffer(
device, vStream, new BufferDescription(
vertices.Length * sizeof(Vertex),
ResourceUsage.Immutable,
BindFlags.VertexBuffer,
CpuAccessFlags.None,
ResourceOptionFlags.None,
0)) { DebugName = "Vertex Buffer" };
iBuffer = new Buffer(
device, iStream, new BufferDescription(
indices.Length * sizeof(Index),
ResourceUsage.Immutable,
BindFlags.IndexBuffer,
CpuAccessFlags.None,
ResourceOptionFlags.None,
0)) { DebugName = "Index Buffer" };
}
If I replace the above code with the following, however, it works. I have no idea what I'm doing wrong.
vBuffer = Buffer.Create(
device, vertices, new BufferDescription(
vertices.Length * sizeof(Vertex),
ResourceUsage.Immutable,
BindFlags.VertexBuffer,
CpuAccessFlags.None,
ResourceOptionFlags.None,
0));
vBuffer.DebugName = "Vertex Buffer";
iBuffer = Buffer.Create(
device, indices, new BufferDescription(
indices.Length * sizeof(Index),
ResourceUsage.Immutable,
BindFlags.IndexBuffer,
CpuAccessFlags.None,
ResourceOptionFlags.None,
0));
iBuffer.DebugName = "Index Buffer";
You need to reset the stream position to zero (e.g. iStream.Position = 0;) before passing it to new Buffer(...).
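That is, since WriteRange leaves the cursor at the end of the stream and the Buffer constructor copies from the current position, a minimal fix looks like:
vStream.WriteRange(vertices);
iStream.WriteRange(indices);
// Rewind both streams so the Buffer constructor reads the data, not past its end
vStream.Position = 0;
iStream.Position = 0;
// ...then create vBuffer and iBuffer from the streams exactly as before.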
