XNA Hardware Instancing: Mesh not rendered completely - c#

I have implemented a basic hardware model instancing method in XNA by following this short tutorial:
http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/
I have created the needed shader (without a texture atlas though, single texture only) and I am trying to use this method to draw a simple tree I generated in 3DS Max 2013 and exported via the FBX format.
The results I'm seeing have left me without a clue as to what is going on.
Back when I was not using instancing, but simply calling Draw on the mesh (once for every tree on a level), the whole tree was shown:
I have made absolutely sure that the Model contains only one Mesh and that Mesh contains only one MeshPart.
I am using a vertex extraction method, calling GetData<>() on the Model's vertex and index buffers, and the correct number of vertices and indices (and hence the correct number of primitives) is rendered. Correct texture coordinates and normals for lighting are also extracted, as is visible from the part of the tree that is being rendered.
The parts of the tree that do appear are in their correct places as well.
The mesh is simply missing some 1,000 or so polygons for no apparent reason whatsoever. I have breakpointed every step of the vertex extraction and the shader parameter setup, and I cannot for the life of me figure out what I am doing wrong.
My shader's vertex transformation function:
VertexShaderOutput VertexShaderFunction2(VertexShaderInput IN, float4x4 instanceTransform : TEXCOORD1)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(IN.Position, transpose(instanceTransform));
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.texCoord = IN.texCoord;
    output.Normal = IN.Normal;
    return output;
}
Vertex buffer bindings and instance buffer generation:
instanceBuffer = new VertexBuffer(Game1.graphics.GraphicsDevice, Core.VertexData.InstanceVertex.vertexDeclaration, counter, BufferUsage.WriteOnly);
instanceVertices = new Core.VertexData.InstanceVertex[counter];
for (int i = 0; i < counter; i++)
{
    instanceVertices[i] = new Core.VertexData.InstanceVertex(locations[i]);
}
instanceBuffer.SetData(instanceVertices);
bufferBinding[0] = new VertexBufferBinding(vBuffer, 0, 0);
bufferBinding[1] = new VertexBufferBinding(instanceBuffer, 0, 1);
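The question doesn't show Core.VertexData.InstanceVertex, but for a per-instance world matrix it would typically look something like the sketch below (an assumption modeled on the linked tutorial, not the asker's actual code). The four Vector4 elements land in TEXCOORD1 through TEXCOORD4, which is also why the shader transposes instanceTransform: the matrix arrives row by row.
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;
// Hypothetical per-instance vertex: one world matrix split across four
// float4 elements bound to TEXCOORD1..TEXCOORD4 (stream 1 in the binding above).
public struct InstanceVertex : IVertexType
{
    public Matrix World;
    public static readonly VertexDeclaration vertexDeclaration = new VertexDeclaration(
        new VertexElement(0, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 1),
        new VertexElement(16, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 2),
        new VertexElement(32, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 3),
        new VertexElement(48, VertexElementFormat.Vector4, VertexElementUsage.TextureCoordinate, 4));
    // locations[i] is assumed to be a world-space position here.
    public InstanceVertex(Vector3 location)
    {
        World = Matrix.CreateTranslation(location);
    }
    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return vertexDeclaration; }
    }
}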
The vertex extraction method used to get all the vertex info (I'm sure this part works correctly, as I have used it before to load test geometric shapes like boxes and spheres into levels for testing various shaders, and to construct bounding boxes around them from the extracted vertex data, and it was all correct):
public void getVertexData(ModelMeshPart part)
{
    modelVertices = new VertexPositionNormalTexture[part.NumVertices];
    rawData = new Vector3[modelVertices.Length];
    modelIndices32 = new uint[rawData.Length];
    modelIndices16 = new ushort[rawData.Length];
    int stride = part.VertexBuffer.VertexDeclaration.VertexStride;
    VertexPositionNormalTexture[] vertexData = new VertexPositionNormalTexture[part.NumVertices];
    part.VertexBuffer.GetData(part.VertexOffset * stride, vertexData, 0, part.NumVertices, stride);
    if (part.IndexBuffer.IndexElementSize == IndexElementSize.ThirtyTwoBits)
        part.IndexBuffer.GetData<uint>(modelIndices32);
    if (part.IndexBuffer.IndexElementSize == IndexElementSize.SixteenBits)
        part.IndexBuffer.GetData<ushort>(modelIndices16);
    for (int i = 0; i < modelVertices.Length; i++)
    {
        rawData[i] = vertexData[i].Position;
        modelVertices[i].Position = rawData[i];
        modelVertices[i].TextureCoordinate = vertexData[i].TextureCoordinate;
        modelVertices[i].Normal = vertexData[i].Normal;
        counter++;
    }
}
This is the rendering code for the object batch (trees in this particular case):
public void RenderHW()
{
    Game1.graphics.GraphicsDevice.RasterizerState = rState;
    treeBatchShader.CurrentTechnique.Passes[0].Apply();
    Game1.graphics.GraphicsDevice.SetVertexBuffers(bufferBinding);
    Game1.graphics.GraphicsDevice.Indices = iBuffer;
    Game1.graphics.GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0, treeMesh.Length, 0, primitive, counter);
    Game1.graphics.GraphicsDevice.RasterizerState = rState2;
}
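For reference, here is a commented version of that draw call, mapping the arguments onto the XNA 4.0 parameter names. This assumes treeMesh.Length is the vertex count of one tree, primitive is its triangle count, and counter is the number of instances, which the question doesn't state explicitly.
Game1.graphics.GraphicsDevice.DrawInstancedPrimitives(
    PrimitiveType.TriangleList, // primitiveType
    0,                          // baseVertex
    0,                          // minVertexIndex
    treeMesh.Length,            // numVertices - vertices in one tree mesh
    0,                          // startIndex
    primitive,                  // primitiveCount - triangles per tree
    counter);                   // instanceCount - number of trees to draw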
If anybody has any idea where to even start looking for errors, just post all ideas that come to mind, as I'm completely stumped as to what's going on.
This even runs counter to all my previous experience: when I'd mess something up in shader code or vertex generation, I'd get an absolute mess on screen - numerous graphical artifacts such as elongated triangles originating where the mesh should be but with one tip stretching back to (0,0,0), a black texture, incorrect positioning (often outside the skybox or below the terrain), incorrect scaling...
This is something different, almost as if it works: the part of the tree that is visible is correct in every single aspect (location, rotation, scale, texture, shading), except that a part is missing. What makes it weirder is that the missing part seems logically segmented: only the tree trunk's primitives and some leaves from the lowest branches are missing, leaving all other primitives correctly rendered with no artifacts. Basically, they're... correctly missing.

Solved. Of course it was the one part I was 100% sure was correct, while it was not.
modelIndices32 = new uint[rawData.Length];
modelIndices16 = new ushort[rawData.Length];
Change that to this (the index buffer holds more entries than there are vertices, since vertices are shared between triangles, so sizing the index arrays by the vertex count truncated the index list and silently dropped the remaining triangles):
modelIndices32 = new uint[part.IndexBuffer.IndexCount];
modelIndices16 = new ushort[part.IndexBuffer.IndexCount];
Now I just have to figure out why 3 draw calls rendering 300 trees are slower than 300 draw calls rendering 1 tree each (i.e. why I wasted an entire afternoon creating a new problem).

Related

aruco.net - How to find marker orientation

I am trying to use OpenCV.NET to read scanned forms. The problem is that sometimes the positions of the relevant regions of interest and the alignment may differ depending on the printer the form was printed from and the way the user scanned it.
So I thought I could use an ArUco marker as a reference point as there are libraries (ArUco.NET) already built to recognize them. I was hoping to find out how much the ArUco code is rotated and then rotate the form backwards by that amount to make sure the text is straight. Then I can use the center of the ArUco code as a reference point to use OCR on specific regions on the form.
I am using the following code to get the OpenGL modelViewMatrix. However, it always seems to contain the same numbers no matter what angle the ArUco code is rotated to. I have only just started with all of these libraries, but I thought that the modelViewMatrix would give me different values depending on the rotation of the marker. Why would it always be the same?
Mat cameraMatrix = new Mat(3, 3, Depth.F32, 1);
Mat distortion = new Mat(1, 4, Depth.F32, 1);
using (Mat image2 = OpenCV.Net.CV.LoadImageM("./image.tif", LoadImageFlags.Grayscale))
{
    using (var detector = new MarkerDetector())
    {
        detector.ThresholdMethod = ThresholdMethod.AdaptiveThreshold;
        detector.Param1 = 7.0;
        detector.Param2 = 7.0;
        detector.MinSize = 0.01f;
        detector.MaxSize = 0.5f;
        detector.CornerRefinement = CornerRefinementMethod.Lines;
        var markerSize = 10;
        IList<Marker> detectedMarkers = detector.Detect(image2, cameraMatrix, distortion);
        foreach (Marker marker in detectedMarkers)
        {
            Console.WriteLine("Detected a marker top left at: " + marker[0].X + " " + marker[0].Y);
            // Upper 3x3 matrix of the modelview matrix (0,4,8,1,5,9,2,6,10) is the rotation matrix.
            double[] modelViewMatrix = marker.GetGLModelViewMatrix();
        }
    }
}
It looks like you have not initialized your camera parameters.
cameraMatrix and distortion are the intrinsic parameters of your camera. You can use OpenCV to find them.
This is for OpenCV 2.4 but it will help you to understand the basics:
http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
Once you have found them, you should be able to get meaningful values back.
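For illustration only, here is a rough sketch of what filling in the intrinsics could look like. The fx/fy/cx/cy values are placeholders that must come from your own calibration, and SetZero/SetReal are assumptions about the OpenCV.Net wrapper's element access (adjust to whatever your version exposes).
using OpenCV.Net;
static Mat BuildCameraMatrix(double fx, double fy, double cx, double cy)
{
    // Intrinsic matrix layout:
    //   [ fx   0  cx ]
    //   [  0  fy  cy ]
    //   [  0   0   1 ]
    // fx, fy: focal length in pixels; cx, cy: principal point.
    var cameraMatrix = new Mat(3, 3, Depth.F32, 1);
    cameraMatrix.SetZero();         // assumed wrapper for cvSetZero
    cameraMatrix.SetReal(0, 0, fx); // assumed wrapper for cvSetReal2D
    cameraMatrix.SetReal(0, 2, cx);
    cameraMatrix.SetReal(1, 1, fy);
    cameraMatrix.SetReal(1, 2, cy);
    cameraMatrix.SetReal(2, 2, 1.0);
    return cameraMatrix;
}
// The 1x4 distortion Mat gets k1, k2, p1, p2 from the same calibration in the same way.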

glDrawElements doesn't draw the second time it's called, using the same vertex array object and the same shader program

I'm very new to opengl and I'm getting this really unexpected behavior.
I'm implementing a basic scene graph and trying to add the ability to reuse shaders and mesh data.
I have a GeometryData object which contains the references to the vertex array object, vertex buffer, and index buffer. It has a render method which draws the model using glDrawElements.
The GeometryData has a reference to a Shader object, which has a reference to the program shader handle.
The problem is that if I render consecutive objects that have the same shader instance, only the first one is rendered, and the others are not for some reason, with no errors. If I alternate shader assignments between consecutive object draws, all of them are drawn normally.
The Render method of my GeometryData is as follows:
public override void Render()
{
    GL.BindVertexArray(vertexArray);
    if (UseShader != null)
        UseShader.Use();
    var err = GL.GetError();
    GL.DrawElements(PrimitiveType.Triangles, numIndices, DrawElementsType.UnsignedInt, IntPtr.Zero);
    err = GL.GetError();
    GL.BindVertexArray(0);
}
The UseShader.Use() method is this:
public void Use()
{
    int current = GL.GetInteger(GetPName.CurrentProgram);
    if (current != ProgramHandle)
        GL.UseProgram(ProgramHandle);
    // Use a uniform block for the projection and modelview matrices
    var index = GL.GetUniformBlockIndex(ProgramHandle, "Matrices");
    GL.UniformBlockBinding(ProgramHandle, index, World.BINDING_POINT);
    // assume interleaved vertex data
    GL.VertexAttribPointer(positionLoc, 3, VertexAttribPointerType.Float, false, 2 * Vector3.SizeInBytes, Vector3.SizeInBytes);
    GL.EnableVertexAttribArray(positionLoc);
    GL.VertexAttribPointer(colorLoc, 3, VertexAttribPointerType.Float, false, 2 * Vector3.SizeInBytes, 0);
    GL.EnableVertexAttribArray(colorLoc);
}
Here are some screenshots. The boxes are created using this loop:
for (int i = -3; i <= 3; i+=3)
{
    Transform child = new Transform();
    child.Translation = new Vector3(i, 0, 0);
    // this causes the second red box to not render
    Geometry geom = new Geometry(i > 0 ? coloredShader : redShader, box);
    // this causes all boxes to render normally
    // Geometry geom = new Geometry(i == 0 ? coloredShader : redShader, box);
    child.addChild(geom);
    root.addChild(child);
}
Picture with alternating shaders (red, colored, red)
Picture with two consecutive objects using the same shader (red, red, colored). The second red object is not drawn. The only change I made to the scene is shader assignments.
Update
After further inspection and debugging, it seems the source of the problem is the uniform block. I'm using a uniform block to send the projection and modelview matrices to my shaders. I suspect that the second call to glBufferSubData to update the modelview matrix is not updating the uniform, or that the shader is not reading that value from the uniform without switching to another shader first.
I tried taking the modelview matrix out of the uniform block and sending it to each shader separately. That made all objects render correctly, but I still would like to use a uniform block for that to avoid redundant uploads.
Solved!
This was really silly. I needed to call glFlush() after updating the uniform block value using glBufferSubData.
public static void setModelView(ref Matrix4 modelView)
{
    GL.BindBuffer(BufferTarget.UniformBuffer, uniformBuffer);
    GL.BufferSubData(BufferTarget.UniformBuffer, (IntPtr)(16 * sizeof(float)), (IntPtr)(16 * sizeof(float)), ref modelView);
    GL.Flush(); // I just added this line
    GL.BindBuffer(BufferTarget.UniformBuffer, 0);
}

Unity Compute Shaders Vertex Index error

I have a compute shader and the C# script that goes with it, used to modify an array of vertices on the y axis - simple enough to be clear.
But despite the fact that it runs fine, the shader seems to forget the first vertex of my shape (except when that shape is a closed volume?).
Here is the C# class:
Mesh m;
//public bool stopProcess = false; //Useless in this version of the example
MeshCollider coll;
public ComputeShader csFile; //the compute shader file added the Unity way
Vector3[] arrayToProcess; //An array of vectors I'll use to store data
ComputeBuffer cbf; //the buffer CPU->GPU (an early version with exactly
                   //the same result had only this one)
ComputeBuffer cbfOut; //the buffer GPU->CPU
int vertexLength;

void Awake() { //Assigning my stuff
    coll = gameObject.GetComponent<MeshCollider>();
    m = GetComponent<MeshFilter>().sharedMesh;
    vertexLength = m.vertices.Length;
    arrayToProcess = m.vertices; //setting the first version of the vertex array (copy of mesh)
}

void Start () {
    cbf = new ComputeBuffer(vertexLength, 32); //Buffer in
    cbfOut = new ComputeBuffer(vertexLength, 32); //Buffer out
    csFile.SetBuffer(0, "Board", cbf);
    csFile.SetBuffer(0, "BoardOut", cbfOut);
}

void Update () {
    csFile.SetFloat("time", Time.time);
    cbf.SetData(m.vertices);
    csFile.Dispatch(0, vertexLength, vertexLength, 1); //Dispatching (I think my mistake is here)
    cbfOut.GetData(arrayToProcess); //getting back my processed vertices
    m.vertices = arrayToProcess; //assigning them to the mesh
    //coll.sharedMesh = m; //collider stuff, useless in this demo
}
And my compute shader script:
#pragma kernel CSMain
RWStructuredBuffer<float3> Board : register(s[0]);
RWStructuredBuffer<float3> BoardOut : register(s[1]);
float time;
[numthreads(1,1,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    float valx = (sin((time*4)+Board[id.x].x));
    float valz = (cos((time*2)+Board[id.x].z));
    Board[id.x].y = (valx + valz)/5;
    BoardOut[id.x] = Board[id.x];
}
At the beginning I was reading from and writing to the same buffer, but since I had the issue, I tried using separate buffers, with no success. I still have the same problem.
Maybe I misunderstood the way compute shaders are supposed to be used (and I know I could use a vertex shader, but I just want to try compute shaders for further improvements).
To complete what I said, I suppose it is related to the way vertices are indexed in the Mesh.vertices array.
I tried a LOT of different block/thread configurations, but nothing seems to solve the issue. Combinations tried:
Block Thread
60,60,1 1,1,1
1,1,1 60,60,3
10,10,3 3,1,1
and some others I do not remember. I think the best configuration should be something with a good balance like:
Block: VertexCount,1,1   Thread: 3,1,1
About the closed volume: I'm not sure about that, because with a cube (8 vertices) everything seems to move accordingly, but with a shape with an odd number of vertices, the first (or last, I haven't checked yet) seems not to be processed.
I tried it with many different shapes, but subdivided planes show it most obviously: one corner is always not moving.
EDIT:
After further study I found out that it is simply the compute shader not processing the last (not the first, I checked) vertices of the mesh. It seems related to the buffer type; I still don't get why RWStructuredBuffer should be an issue or how badly I'm using it. Is it reserved for streams? I can't understand the MSDN doc on this one.
EDIT: After resolution
The C# script :
using UnityEngine;
using System.Collections;
public class TreeObject : MonoBehaviour {
    Mesh m;
    public bool stopProcess = false;
    MeshCollider coll;
    public ComputeShader csFile;
    Vector3[] arrayToProcess;
    ComputeBuffer cbf;
    ComputeBuffer cbfOut;
    int vertexLength;

    // Use this for initialization
    void Awake() {
        coll = gameObject.GetComponent<MeshCollider>();
        m = GetComponent<MeshFilter>().mesh;
        vertexLength = m.vertices.Length+3; //I add 3 because apparently
                                            //vertexnumber is odd
        //arrayToProcess = new Vector3[vertexLength];
        arrayToProcess = m.vertices;
    }

    void Start () {
        cbf = new ComputeBuffer(vertexLength,12);
        cbfOut = new ComputeBuffer(vertexLength,12);
        csFile.SetBuffer(0,"Board",cbf);
        csFile.SetBuffer(0,"BoardOut",cbfOut);
    }

    // Update is called once per frame
    void Update () {
        csFile.SetFloat("time",Time.time);
        cbf.SetData(m.vertices);
        csFile.Dispatch(0,vertexLength,1,1);
        cbfOut.GetData(arrayToProcess);
        m.vertices = arrayToProcess;
        coll.sharedMesh = m;
    }
}
I had already rolled back to Blocks: VCount,1,1 before your answer, because it was logical that with VCount*VCount I was processing the vertices "square" times more often than needed.
To complete: you were absolutely right, the stride was obviously causing issues. Could you complete your answer with a link to documentation about the stride parameter? (From anywhere, because the Unity docs are void on this, and MSDN did not help me understand why it should be 12 and not 32, as I thought 32 was the size of a float3.)
So, documentation needed, please.
In the meantime I'll try to provide a flexible enough (generic?) version of this to make it more robust, and start adding some nice array-processing functions to my shader...
I'm familiar with compute shaders but have never touched Unity; however, having looked over the documentation for compute shaders in Unity, a couple of things stand out.
The cbf and cbfOut ComputeBuffers are created with a stride of 32 (bytes?). Both your StructuredBuffers contain float3s which have a stride of 12 bytes, not 32. Where has 32 come from?
When you dispatch your compute shader you're requesting a two-dimensional dispatch (vertexLength, vertexLength, 1), but you're operating on a 1D array of float3s. You will end up with a race condition where many different threads think they're responsible for updating each element of the array. Although awful for performance, if you want a thread group size of [numthreads(1,1,1)] then you should dispatch (vertexLength, 1, 1) thread groups when calling Dispatch (i.e. Dispatch(60, 1, 1) with numthreads(1,1,1)).
For better performance, the number of threads in your thread group / wave should be a multiple of 64 for best efficiency on AMD hardware. You then need only dispatch ceil(numVertices/64) thread groups and simply insert some logic into the shader to ensure id.x is not out of bounds for any given thread, as in the sketch below.
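As a rough C#-side sketch of that dispatch pattern, reusing the fields from the question's script and assuming the kernel is changed to [numthreads(64,1,1)] with an int vertexCount variable and an early-out guard (if (id.x >= (uint)vertexCount) return;):
void Update()
{
    csFile.SetFloat("time", Time.time);
    cbf.SetData(m.vertices);
    // One thread per vertex, 64 threads per group; dispatch just enough groups
    // to cover every vertex and let the last group's spare threads exit early
    // via the vertexCount guard in the shader.
    int groups = Mathf.CeilToInt(vertexLength / 64.0f);
    csFile.SetInt("vertexCount", vertexLength);
    csFile.Dispatch(0, groups, 1, 1);
    cbfOut.GetData(arrayToProcess);
    m.vertices = arrayToProcess;
}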
EDIT:
The documentation for the ComputeBuffer constructor is here: Unity ComputeBuffer Documentation
While it doesn't explicitly say "stride" is in bytes, it's the only reasonable assumption.
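If you'd rather not hard-code the 12, here is a small sketch that derives the stride from the managed struct (assuming the C# side uses UnityEngine.Vector3, i.e. three 4-byte floats, matching an HLSL float3):
using System.Runtime.InteropServices;
using UnityEngine;
public static class BufferUtil
{
    // Element size in bytes of a blittable struct; for Vector3 this is
    // 3 * sizeof(float) = 12, which matches a StructuredBuffer<float3> element.
    public static int StrideOf<T>() where T : struct
    {
        return Marshal.SizeOf(typeof(T));
    }
}
// Usage: var cbf = new ComputeBuffer(vertexLength, BufferUtil.StrideOf<Vector3>());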

Applying vtkWarpLens to Camera vs. Scene Data

Can anyone help me with vtkWarpLens?
What I am trying to do is implement a distortion pattern on the camera to modify how the data is seen.
Here is the meat of the code (it's in C#; I'm using ActiViz .NET):
vtkPolyData pd = vtkPolyData.New();
CreateFromFile(pd); // this creates a triangle representation of a height field
double [] sr = pd.GetScalarRange();
vtkLookupTable lut = vtkLookupTable.New();
lut.SetNumberOfColors(16);
lut.SetHueRange(0.667, 0.0);
lut.Build();
wl = vtkWarpLens.New();
wl.SetInputConnection(pd.GetProducerPort());
wl.SetPrincipalPoint(0.5, 0.5);
wl.SetFormatWidth(1);
wl.SetFormatHeight(1);
wl.SetImageWidth(1000);
wl.SetImageHeight(1000);
wl.SetK1(0.01307);
wl.SetK2(0.0003102);
wl.SetP1(1.953e-005);
wl.SetP2(-9.655e-005);
vtkDataSetMapper dsmDistorted = vtkDataSetMapper.New();
dsmDistorted.SetInputConnection(wl.GetOutputPort());
dsmDistorted.SetLookupTable(lut);
dsmDistorted.SetScalarRange(sr[0]+20, sr[1]);
vtkActor dsDistortedActor = vtkActor.New();
dsDistortedActor.SetMapper(dsmDistorted);
m_renDistorted.AddActor(dsDistortedActor);
m_renWin.SetDesiredUpdateRate(0);
m_renWin.Render();
m_renDistorted.ResetCamera();
So, basically, I am creating a terrain representation using polygons, passing it through the warplens, passing that through a dataset mapper to give it pretty colors, then displaying it.
The issue is that the "warp" appears to be static to the terrain, not to the camera. I'm pretty new to VTK, so it's possible that I don't understand how the Interactor and the Camera are related.
Can someone help?

Texture appears grey when rendered

I'm currently working my way through "Beginning C# Programming", and have hit a problem in chapter 7 when drawing textures.
I have used the same code as on the demo CD, and although I had to change the path of the texture to be absolute, when rendered it appears grey.
I have debugged the program by writing the loaded texture out to a file, and that is fine - no problems there. So something after that point is going wrong.
Here are some snippets of code:
public void InitializeGraphics()
{
    // set up the parameters
    Direct3D.PresentParameters p = new Direct3D.PresentParameters();
    p.SwapEffect = Direct3D.SwapEffect.Discard;
    ...
    graphics = new Direct3D.Device( 0, Direct3D.DeviceType.Hardware, this,
        Direct3D.CreateFlags.SoftwareVertexProcessing, p );
    ...
    // set up various drawing options
    graphics.RenderState.CullMode = Direct3D.Cull.None;
    graphics.RenderState.AlphaBlendEnable = true;
    graphics.RenderState.AlphaBlendOperation = Direct3D.BlendOperation.Add;
    graphics.RenderState.DestinationBlend = Direct3D.Blend.InvSourceAlpha;
    graphics.RenderState.SourceBlend = Direct3D.Blend.SourceAlpha;
    ...
}
public void InitializeGeometry()
{
    ...
    texture = Direct3D.TextureLoader.FromFile(
        graphics, "E:\\Programming\\SharpDevelop_Projects\\AdvancedFrameworkv2\\texture.jpg", 0, 0, 0, 0, Direct3D.Format.Unknown,
        Direct3D.Pool.Managed, Direct3D.Filter.Linear,
        Direct3D.Filter.Linear, 0 );
    ...
}
protected virtual void Render()
{
    graphics.Clear( Direct3D.ClearFlags.Target, Color.White, 1.0f, 0 );
    graphics.BeginScene();
    // set the texture
    graphics.SetTexture( 0, texture );
    // set the vertex format
    graphics.VertexFormat = Direct3D.CustomVertex.TransformedTextured.Format;
    // draw the triangles
    graphics.DrawUserPrimitives( Direct3D.PrimitiveType.TriangleStrip, 2, vertexes );
    graphics.EndScene();
    graphics.Present();
    ...
}
I can't figure out what is going wrong here. Obviously if I load up the texture in Windows it displays fine, so either there's something not right in the code examples given in the book (it doesn't actually work as given), or, presumably, there's something wrong with my environment.
You're using a REALLY old technology there... I'm guessing you're trying to make a game (as we all did when we started out!), so try using XNA instead. My best guess is that it's your graphics driver. I know that sounds like a cop-out, but seriously, I've seen this before, and once I swapped out my old graphics card for a new one it worked! I'm not saying your card is broken, or that it's impossible to get this to work. But my two best suggestions would be to:
1) Start using XNA and use the tutorials on http://www.xnadevelopment.com/tutorials.shtml
2) Replace your graphics card (if you want to carry on with what you are doing now).
