SharpDX DataStream-based buffer initialization fails - C#

I am trying to create a very basic mesh renderer using D3D11 for my final project for school. Although I followed the basic online tutorials, like the Rastertek site and Frank Luna's book, to the letter, and used the simplest passthrough shader imaginable, I couldn't get my triangles to show up on the screen. I finally found out about VS 2013's graphics debugging ability, and with it I was able to see that my vertex and index buffers were filled with garbage data. I've hosted the solution here if you want to run the code, but can someone familiar with D3D and/or its SharpDX C# wrapper tell me what I'm doing wrong in the following code?
This is my geometry data. The Vertex struct has Vector4 position and color fields, and Index is an alias for ushort.
var vertices = new[]
{
    new Vertex(new Vector4(-1, 1, 0, 1), Color.Red),
    new Vertex(new Vector4(1, 1, 0, 1), Color.Green),
    new Vertex(new Vector4(1, -1, 0, 1), Color.Blue),
    new Vertex(new Vector4(-1, -1, 0, 1), Color.White)
};
var indices = new Index[]
{
    0, 2, 1,
    0, 3, 2
};
And here is the code that fails to initialize my vertex and index buffers with the above data.
var vStream = new DataStream(sizeInBytes: vertices.Length * sizeof(Vertex), canRead: false, canWrite: true);
var iStream = new DataStream(sizeInBytes: indices.Length * sizeof(Index), canRead: false, canWrite: true);

vStream.WriteRange(vertices);
iStream.WriteRange(indices);

vBuffer = new Buffer(
    device, vStream, new BufferDescription(
        vertices.Length * sizeof(Vertex),
        ResourceUsage.Immutable,
        BindFlags.VertexBuffer,
        CpuAccessFlags.None,
        ResourceOptionFlags.None,
        0)) { DebugName = "Vertex Buffer" };

iBuffer = new Buffer(
    device, iStream, new BufferDescription(
        indices.Length * sizeof(Index),
        ResourceUsage.Immutable,
        BindFlags.IndexBuffer,
        CpuAccessFlags.None,
        ResourceOptionFlags.None,
        0)) { DebugName = "Index Buffer" };
If I replace the above code with the following, however, it works. I have no idea what I'm doing wrong.
vBuffer = Buffer.Create(
    device, vertices, new BufferDescription(
        vertices.Length * sizeof(Vertex),
        ResourceUsage.Immutable,
        BindFlags.VertexBuffer,
        CpuAccessFlags.None,
        ResourceOptionFlags.None,
        0));
vBuffer.DebugName = "Vertex Buffer";

iBuffer = Buffer.Create(
    device, indices, new BufferDescription(
        indices.Length * sizeof(Index),
        ResourceUsage.Immutable,
        BindFlags.IndexBuffer,
        CpuAccessFlags.None,
        ResourceOptionFlags.None,
        0));
iBuffer.DebugName = "Index Buffer";

You need to reset the stream position to zero (vStream.Position = 0; iStream.Position = 0;) before passing the streams to new Buffer(...). WriteRange leaves each stream's position at the end of the written data, and the Buffer constructor reads from the current position, so your buffers were being initialized from whatever memory lies beyond your data. Buffer.Create works because it takes the managed array directly instead of reading from a stream.
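Applied to the code above, a minimal sketch of the fix (only the vertex stream is shown; the index stream is handled identically):

vStream.WriteRange(vertices);
// WriteRange advanced the position to the end of the written data;
// rewind so the Buffer constructor reads from the start.
vStream.Position = 0;

vBuffer = new Buffer(
    device, vStream, new BufferDescription(
        vertices.Length * sizeof(Vertex),
        ResourceUsage.Immutable,
        BindFlags.VertexBuffer,
        CpuAccessFlags.None,
        ResourceOptionFlags.None,
        0)) { DebugName = "Vertex Buffer" };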

Related

Close a Mesh in EyeShot

I'm trying to merge 2 meshes in EyeShot to calculate the volume of the intersection and difference between this solid and another one. I need the resulting mesh to be closed.
I tried this to merge the meshes:
List<Point3D> lpoint3D = new List<Point3D>();
lpoint3D.Add(new Point3D(0, 0, 10));
lpoint3D.Add(new Point3D(5, 5, 10));
lpoint3D.Add(new Point3D(0, 5, 10));
lpoint3D.Add(new Point3D(5, 0, 10));
VAR_Glob.mesh2 = UtilityEx.Triangulate(lpoint3D);

lpoint3D = new List<Point3D>();
lpoint3D.Add(new Point3D(0, 0, 5));
lpoint3D.Add(new Point3D(5, 5, 5));
lpoint3D.Add(new Point3D(0, 5, 5));
lpoint3D.Add(new Point3D(5, 0, 5));
VAR_Glob.mesh1 = UtilityEx.Triangulate(lpoint3D);

VAR_Glob.mesh2.MergeWith(VAR_Glob.mesh1, false, true);
But the resulting mesh is not closed. When I convert the mesh to a solid, I just get the two planes separately:
Solid s2 = VAR_Glob.mesh2.ConvertToSolid();
The second parameter of this call is weldNow; please set it to true:
VAR_Glob.mesh2.MergeWith(VAR_Glob.mesh1, true, true);
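Applied to the question's code, the tail end would then read (a sketch that reuses only the calls shown above; the three-argument MergeWith overload is carried over from the question):

// merge with weldNow = true so coincident vertices are welded during the merge
VAR_Glob.mesh2.MergeWith(VAR_Glob.mesh1, true, true);

// with the vertices welded, the conversion should produce a single solid
Solid s2 = VAR_Glob.mesh2.ConvertToSolid();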

Index out of bounds error when rendering mesh in Unity

I'm trying to render a hexagon in Unity using these coordinates:
https://qph.fs.quoracdn.net/main-qimg-9ad01ef3babb64b57d378a1558f468a7
What I'm getting is this error:
Failed setting triangles. Some indices are referencing out of bounds vertices. IndexCount: 18, VertexCount: 7
Any ideas what is wrong with this code?
void Start()
{
    MeshFilter meshFilter = gameObject.AddComponent<MeshFilter>();
    Mesh mesh = new Mesh();
    Vector3[] vertices = new Vector3[7]
    {
        new Vector3(0, 0),
        new Vector3(-1.0f, 0),
        new Vector3(-0.5f, Mathf.Sqrt(3/2)),
        new Vector3(0.5f, Mathf.Sqrt(3/2)),
        new Vector3(1, 0),
        new Vector3(0.5f, -Mathf.Sqrt(3/2)),
        new Vector3(-0.5f, -Mathf.Sqrt(3/2))
    };
    mesh.vertices = vertices;
    int[] tris = new int[18]
    {
        0, 2, 1,
        0, 3, 2,
        0, 4, 3,
        0, 5, 4,
        0, 6, 5,
        0, 7, 6
    };
    mesh.triangles = tris;
    meshFilter.mesh = mesh;
}
The last triangle uses the vertex at index 7, which does not exist, since you specified the hexagon to have only 7 points. In case you don't know, indices start at 0, which is why this doesn't work (although by the looks of it you already know this; you could just be blindly following a tutorial, which is why I mention it).
It should be 1 instead of 7, since you have to loop back around and connect the final triangle between index 6 (the last perimeter index) and the first perimeter index, which is index 1.
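With that single index changed, the triangle array from the question becomes:

int[] tris = new int[18]
{
    0, 2, 1,
    0, 3, 2,
    0, 4, 3,
    0, 5, 4,
    0, 6, 5,
    0, 1, 6   // was 0, 7, 6: wrap back to the first perimeter vertex
};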

HLSL modify depth in pixel shader

I need to render an image (with depth) that I get from outside. I can construct the two textures and pass them into a shader with no problem (I can verify that the values sampled in the pixel shader are correct).
Here's what my HLSL looks like:
// image texture
Texture2D m_TextureColor : register(t0);

// depth texture with values [0..1]
Texture2D<float> m_TextureDepth : register(t1);

// sampler to forbid linear filtering since we're dealing with pixels
SamplerState m_TextureSampler { Filter = MIN_MAG_MIP_POINT; };

struct VS_IN
{
    float4 position : POSITION;
    float2 texcoord : TEXCOORD;
};

struct VS_OUT
{
    float4 position : SV_POSITION;
    float2 texcoord : TEXCOORD0;
};

struct PS_OUT
{
    float4 color : COLOR0;
    float depth : DEPTH0;
};

VS_OUT VS(VS_IN input)
{
    VS_OUT output = (VS_OUT)0;
    output.position = input.position;
    output.texcoord = input.texcoord;
    return output;
}

PS_OUT PS(VS_OUT input) : SV_Target
{
    PS_OUT output = (PS_OUT)0;
    output.color = m_TextureColor.SampleLevel(m_TextureSampler, input.texcoord, 0);
    // I want to modify depth of the pixel,
    // but it looks like it has no effect on depth no matter what I set here
    output.depth = m_TextureDepth.SampleLevel(m_TextureSampler, input.texcoord, 0);
    return output;
}
I construct the vertex buffer from these (with PrimitiveTopology.TriangleStrip), where the first argument (a Vector4) is the position and the second (a Vector2) is the texture coordinate:

new[]
{
    new Vertex(new Vector4(-1, -1, 0.5f, 1), new Vector2(0, 1)),
    new Vertex(new Vector4(-1, 1, 0.5f, 1), new Vector2(0, 0)),
    new Vertex(new Vector4(1, -1, 0.5f, 1), new Vector2(1, 1)),
    new Vertex(new Vector4(1, 1, 0.5f, 1), new Vector2(1, 0)),
}
Everything works just fine: I see my image, and I can sample depth from the depth texture and construct something visual from it (that's how I can verify that the depth values I'm sampling are correct). However, I can't figure out how to modify the pixel's depth so that it is honored when the depth test happens; at the moment everything depends on the z value I set as the vertex position.
This is how I'm setting up Direct3D 11 (I'm using SharpDX and C#):
var swapChainDescription = new SwapChainDescription
{
    BufferCount = 1,
    ModeDescription = new ModeDescription(bufferSize.Width, bufferSize.Height, new Rational(60, 1), Format.R8G8B8A8_UNorm),
    IsWindowed = true,
    OutputHandle = HostHandle,
    SampleDescription = new SampleDescription(1, 0),
    SwapEffect = SwapEffect.Discard,
    Usage = Usage.RenderTargetOutput,
};

var swapChainFlags = DeviceCreationFlags.None | DeviceCreationFlags.BgraSupport;
SharpDX.Direct3D11.Device.CreateWithSwapChain(DriverType.Hardware, swapChainFlags, swapChainDescription, out var device, out var swapchain);
Setting back buffer and depth/stencil buffer:
// color buffer
using (var textureColor = SwapChain.GetBackBuffer<Texture2D>(0))
{
    TextureColorResourceView = new RenderTargetView(Device, textureColor);
}

// depth buffer
using (var textureDepth = new Texture2D(Device, new Texture2DDescription
{
    Format = Format.D32_Float,
    ArraySize = 1,
    MipLevels = 1,
    Width = BufferSize.Width,
    Height = BufferSize.Height,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.DepthStencil,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None
}))
{
    TextureDepthResourceView = new DepthStencilView(Device, textureDepth);
}
DeviceContext.OutputMerger.SetTargets(TextureDepthResourceView, TextureColorResourceView);
Preparing depth stencil state:
var description = DepthStencilStateDescription.Default();
description.DepthComparison = Comparison.LessEqual;
description.IsDepthEnabled = true;
description.DepthWriteMask = DepthWriteMask.All;
DepthState = new DepthStencilState(Device, description);
And using it:
DeviceContext.OutputMerger.SetDepthStencilState(DepthState);
This is how I construct the color/depth textures I'm sending to the shader:
public static (ShaderResourceView resource, Texture2D texture) CreateTextureDynamic(this Device device, System.Drawing.Size size, Format format)
{
    var textureDesc = new Texture2DDescription
    {
        MipLevels = 1,
        Format = format,
        Width = size.Width,
        Height = size.Height,
        ArraySize = 1,
        BindFlags = BindFlags.ShaderResource,
        Usage = ResourceUsage.Dynamic,
        SampleDescription = new SampleDescription(1, 0),
        CpuAccessFlags = CpuAccessFlags.Write,
    };
    var texture = new Texture2D(device, textureDesc);
    return (new ShaderResourceView(device, texture), texture);
}
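As a usage sketch, the two shader textures could be created and bound like this (the formats and imageSize are assumptions on my part: R8G8B8A8_UNorm for the color image and R32_Float to match the shader's Texture2D<float>; the question doesn't state them):

// hypothetical usage; the formats and imageSize are assumed, not from the question
var (colorView, colorTexture) = device.CreateTextureDynamic(imageSize, Format.R8G8B8A8_UNorm);
var (depthView, depthTexture) = device.CreateTextureDynamic(imageSize, Format.R32_Float);

// bind to the slots the shader declares: t0 for color, t1 for depth
device.ImmediateContext.PixelShader.SetShaderResource(0, colorView);
device.ImmediateContext.PixelShader.SetShaderResource(1, depthView);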
Also, since I need to update them frequently:
public static void UpdateResource(this Texture2D texture, int[] buffer, System.Drawing.Size size)
{
    var dataBox = texture.Device.ImmediateContext.MapSubresource(texture, 0, MapMode.WriteDiscard, MapFlags.None, out var dataStream);
    // copy row by row because the mapped RowPitch may be larger than Width * sizeof(int)
    Parallel.For(0, size.Height, rowIndex => Marshal.Copy(buffer, size.Width * rowIndex, dataBox.DataPointer + dataBox.RowPitch * rowIndex, size.Width));
    dataStream.Dispose();
    texture.Device.ImmediateContext.UnmapSubresource(texture, 0);
}

public static void UpdateResource(this Texture2D texture, float[] buffer, System.Drawing.Size size)
{
    var dataBox = texture.Device.ImmediateContext.MapSubresource(texture, 0, MapMode.WriteDiscard, MapFlags.None, out var dataStream);
    // same row-by-row copy for float data
    Parallel.For(0, size.Height, rowIndex => Marshal.Copy(buffer, size.Width * rowIndex, dataBox.DataPointer + dataBox.RowPitch * rowIndex, size.Width));
    dataStream.Dispose();
    texture.Device.ImmediateContext.UnmapSubresource(texture, 0);
}
I also googled a lot about this and found similar posts, like this one: https://www.gamedev.net/forums/topic/573961-how-to-set-depth-value-in-pixel-shader/ but I couldn't manage to solve it on my side.
Thanks in advance!
To write to the depth buffer, you need to target the SV_Depth system-value semantic. So your pixel shader output struct would look more like the following:
struct PS_OUT
{
    float4 color : SV_Target;
    float depth : SV_Depth;
};
And the shader entry point would no longer specify SV_Target as in your example (the SV_ outputs are now defined within the struct). So it would look like:
PS_OUT PS(VS_OUT input)
{
    PS_OUT output = (PS_OUT)0;
    output.color = m_TextureColor.SampleLevel(m_TextureSampler, input.texcoord, 0);
    // Now that output.depth is defined with SV_Depth, and you have depth-write enabled,
    // this should write to the depth buffer.
    output.depth = m_TextureDepth.SampleLevel(m_TextureSampler, input.texcoord, 0);
    return output;
}
Note that you may incur a performance penalty for explicitly writing depth (specifically on AMD hardware), since it forces a bypass of the early-depth hardware optimization: all subsequent draw calls using that depth buffer will have early-Z optimizations disabled. It's therefore generally a good idea to perform the depth-write operation as late as possible.

Oblique projection in XNA

I'm trying to achieve oblique projection (http://en.wikipedia.org/wiki/Oblique_projection) in the XNA framework:
float cos = (float)Math.Cos(DegreeToRadian(45)) * -1;
float sin = (float)Math.Sin(DegreeToRadian(45)) * -1;

Matrix obliqueProjection = new Matrix(
    1, 0, cos, 0,
    0, 1, sin, 0,
    0, 0, 1, 0,
    0, 0, 0, 1);

Matrix orthographicProjection = Matrix.CreateOrthographic(10, 10, -1, 100000);
projection = orthographicProjection * obliqueProjection;
As you can see, I'm just multiplying the orthographic and oblique projections.
What I get is this:
http://imageshack.us/photo/my-images/835/oblique1.png/
It's basically what orthographic projection would look like, but with some weird far clipping.
How can I achieve proper oblique projection?
Thanks in advance.
Answered by Diki: http://forums.create.msdn.com/forums/p/85032/513412.aspx#513412
The code needs to be changed like this:
Matrix obliqueProjection = new Matrix(
    1, 0, 0, 0,
    0, 1, 0, 0,
    cos, sin, 1, 0,
    0, 0, 0, 1);

projection = obliqueProjection * orthographicProjection;
For starters, you can implement the proper formula. The Wikipedia article says the projection matrix uses 0.5 * cos and 0.5 * sin, while your version uses just cos and sin.
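Combining that with Diki's transposed layout, a corrected sketch (the sign convention is carried over from the question's code):

float cos = (float)Math.Cos(DegreeToRadian(45)) * -1;
float sin = (float)Math.Sin(DegreeToRadian(45)) * -1;

// halve the shear terms, per the Wikipedia formula (0.5 * cos, 0.5 * sin)
Matrix obliqueProjection = new Matrix(
    1, 0, 0, 0,
    0, 1, 0, 0,
    0.5f * cos, 0.5f * sin, 1, 0,
    0, 0, 0, 1);

projection = obliqueProjection * orthographicProjection;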

SlimDX Direct3D 11 Indexing Problems

I'm trying to draw an indexed square using SlimDX and Direct3D 11. I've managed to draw a square without indices, but when I swap to my indexed version I just get a blank screen.
My input layout is set to take only position data (I'm essentially extending the third tutorial on the SlimDX website) and to draw triangle lists.
My render loop code is as follows (I am using the triangle.fx pixel and vertex shaders from the tutorial; they take vertex positions in screen coordinates and paint them yellow; D3D is shorthand for SlimDX.Direct3D11):
// clear the render target
context.ClearRenderTargetView(renderTarget, new Color4(0.5f, 0.5f, 1.0f));

context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(mesh.VertexBuffer, 12, 0));
context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UNorm, 0);
context.DrawIndexed(mesh.indices, 0, 0);

swapChain.Present(0, PresentFlags.None);
"mesh" is a struct that holds a Vertex buffer, Index buffer and vertex count. The data is filled here:
Vertex[] vertexes = new Vertex[4];
vertexes[0].Position = new Vector3(0, 0, 0.5f);
vertexes[1].Position = new Vector3(0, 0.5f, 0.5f);
vertexes[2].Position = new Vector3(0.5f, 0, 0.5f);
vertexes[3].Position = new Vector3(0.5f, 0.5f, 0.5f);

UInt16[] indexes = { 0, 1, 2, 1, 3, 2 };

DataStream vertices = new DataStream(12 * 4, true, true);
foreach (Vertex vertex in vertexes)
{
    vertices.Write(vertex.Position);
}
vertices.Position = 0;

DataStream indices = new DataStream(sizeof(int) * 6, true, true);
foreach (UInt16 index in indexes)
{
    indices.Write(index);
}
indices.Position = 0;

mesh = new Mesh();
D3D.Buffer vertexBuffer = new D3D.Buffer(device, vertices, 12 * 4, ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
mesh.VertexBuffer = vertexBuffer;
mesh.IndexBuffer = new D3D.Buffer(device, indices, 2 * 6, ResourceUsage.Default, BindFlags.IndexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
mesh.vertices = vertexes.GetLength(0);
mesh.indices = indexes.Length;
All of this is nearly identical to my unindexed square method (with the addition of index buffers and indices, and the removal of two duplicate vertices that aren't needed with indexing), but while the unindexed method draws a square, the indexed method doesn't.
My current theory is that there is something wrong with either this line:
mesh.IndexBuffer = new D3D.Buffer(device, indices, 2 * 6, ResourceUsage.Default, BindFlags.IndexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
Or these lines:
context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UNorm, 0);
context.DrawIndexed(mesh.indices, 0, 0);
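For what it's worth, the format in that SetIndexBuffer call deserves a close look: Direct3D 11 only accepts R16_UInt or R32_UInt as index buffer formats, so R16_UNorm will not be interpreted as 16-bit unsigned indices. A sketch of the corrected calls:

// 16-bit indices must be bound as Format.R16_UInt (R16_UNorm is not a valid index buffer format)
context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UInt, 0);
context.DrawIndexed(mesh.indices, 0, 0);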
Why don't you just use a vertex and index buffer for this simple example?
Like this (DirectX 9):
VertexBuffer vb;
IndexBuffer ib;

vertices = new PositionColored[WIDTH * HEIGHT];
//vertex creation
vb = new VertexBuffer(device, HEIGHT * WIDTH * PositionColored.SizeInBytes, Usage.WriteOnly, PositionColored.Format, Pool.Default);
DataStream stream = vb.Lock(0, 0, LockFlags.None);
stream.WriteRange(vertices);
vb.Unlock();

indices = new short[(WIDTH - 1) * (HEIGHT - 1) * 6];
//index creation
ib = new IndexBuffer(device, sizeof(int) * (WIDTH - 1) * (HEIGHT - 1) * 6, Usage.WriteOnly, Pool.Default, false);
stream = ib.Lock(0, 0, LockFlags.None);
stream.WriteRange(indices);
ib.Unlock();
//Drawing
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.DarkSlateBlue, 1.0f, 0);
device.BeginScene();

device.VertexFormat = PositionColored.Format;
device.SetStreamSource(0, vb, 0, PositionColored.SizeInBytes);
device.Indices = ib;
device.SetTransform(TransformState.World, Matrix.Translation(-HEIGHT / 2, -WIDTH / 2, 0) * Matrix.RotationZ(angle));
device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, WIDTH * HEIGHT, 0, indices.Length / 3);

device.EndScene();
device.Present();
I use the mesh in another way (DirectX 9 code again):
private void CreateMesh()
{
    meshTerrain = new Mesh(device, (WIDTH - 1) * (HEIGHT - 1) * 2, WIDTH * HEIGHT, MeshFlags.Managed, PositionColored.Format);

    DataStream stream = meshTerrain.VertexBuffer.Lock(0, 0, LockFlags.None);
    stream.WriteRange(vertices);
    meshTerrain.VertexBuffer.Unlock();
    stream.Close();

    stream = meshTerrain.IndexBuffer.Lock(0, 0, LockFlags.None);
    stream.WriteRange(indices);
    meshTerrain.IndexBuffer.Unlock();
    stream.Close();

    meshTerrain.GenerateAdjacency(0.5f);
    meshTerrain.OptimizeInPlace(MeshOptimizeFlags.VertexCache);
    meshTerrain = meshTerrain.Clone(device, MeshFlags.Dynamic, PositionNormalColored.Format);
    meshTerrain.ComputeNormals();
}

//Drawing
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.DarkSlateBlue, 1.0f, 0);
device.BeginScene();

device.VertexFormat = PositionColored.Format;
device.SetTransform(TransformState.World, Matrix.Translation(-HEIGHT / 2, -WIDTH / 2, 0) * Matrix.RotationZ(angle));

int numSubSets = meshTerrain.GetAttributeTable().Length;
for (int i = 0; i < numSubSets; i++)
{
    meshTerrain.DrawSubset(i);
}

device.EndScene();
device.Present();
