XNA 4.0 draw a cube with DrawUserIndexedPrimitives method - c#

EDIT
Since I read what Mark H suggested (thanks a lot, I found it very useful), I think my question becomes clearer structured this way:
Using XNA 4.0, I'm trying to draw a cube.
I'm using this method:
GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColor>(
    PrimitiveType.LineList,
    primitiveList,
    0,               // vertex buffer offset to add to each element of the index buffer
    8,               // number of vertices in pointList
    lineListIndices, // the index buffer
    0,               // first index element to read
    7                // number of primitives to draw
);
I got the code sample from this page, which simply draws a series of triangles.
I want to modify this code in order to draw a cube. I was able to slightly move the camera so I can get a perception of solidity, and I set the vertex array to contain the 8 points defining a cube. But I can't fully understand how many primitives I have to draw (last parameter) for each PrimitiveType.
So, I wasn't able to draw the cube (just some of the edges, in no defined order).
More in detail:
To build the vertex index list, the sample used:
// Initialize an array of indices of type short.
lineListIndices = new short[(points * 2) - 2];

// Populate the array with references to indices in the vertex buffer
for (int i = 0; i < points - 1; i++)
{
    lineListIndices[i * 2] = (short)(i);
    lineListIndices[(i * 2) + 1] = (short)(i + 1);
}
I'm ashamed to say I cannot do the same in the case of a cube.
What has to be the size of lineListIndices?
How should I populate it? In what order?
And how do these things change when I use a different PrimitiveType?
In the code sample there are also a couple of other calls which I cannot fully understand, which are:
// Initialize the vertex buffer, allocating memory for each vertex.
vertexBuffer = new VertexBuffer(graphics.GraphicsDevice, vertexDeclaration,
points, BufferUsage.None);
// Set the vertex buffer data to the array of vertices.
vertexBuffer.SetData<VertexPositionColor>(pointList);
and
vertexDeclaration = new VertexDeclaration(new VertexElement[]
{
new VertexElement(0, VertexElementFormat.Vector3, VertexElementUsage.Position, 0),
new VertexElement(12, VertexElementFormat.Color, VertexElementUsage.Color, 0)
}
);
that is, for VertexBuffer and VertexDeclaration I could not find any significant (monkey-see-monkey-do) guide. I report them too because I think they could be involved in understanding things.
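For what it's worth, XNA 4.0 seems to let you skip building the VertexElement array by hand, since the built-in vertex types carry their own declaration. If I understand it correctly, the buffer above can apparently also be created like this (just a sketch, not tested):

vertexBuffer = new VertexBuffer(graphics.GraphicsDevice,
    typeof(VertexPositionColor),   // or VertexPositionColor.VertexDeclaration
    points, BufferUsage.None);
vertexBuffer.SetData<VertexPositionColor>(pointList);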
I think I also have to understand something related to the order the vertices are stored in the array. But actually I have no clue what I should learn to get this function drawing a cube. So, if anybody could point me in the right direction, it will be appreciated.
Hope I have made myself clearer this time.

Eventually I was able to find a worthwhile guide to drawing a cube. There they use the DrawUserPrimitives<T> method, but the difference is quite a thin one, and I had no difficulty using both methods. As you will see, this guide includes explanations of every aspect of drawing a cube, like the order the vertices have to be stored and drawn in, and the number of primitives to pass to the drawing method.
Hope this could be of help to anyone who will be struggling with this problem such as I was.
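For anyone who lands here before finding such a guide, here is roughly what it boils down to for the wireframe case, as far as I understand it (a sketch; the corner ordering is an assumption of mine, so adapt it to your own pointList):

// Sketch: wireframe cube drawn as a LineList.
// Assumes pointList[0..7] holds the 8 corners in this order:
// 0:(0,0,0) 1:(1,0,0) 2:(1,1,0) 3:(0,1,0)  - back face
// 4:(0,0,1) 5:(1,0,1) 6:(1,1,1) 7:(0,1,1)  - front face
short[] cubeEdgeIndices = new short[]
{
    0,1, 1,2, 2,3, 3,0,   // back face edges
    4,5, 5,6, 6,7, 7,4,   // front face edges
    0,4, 1,5, 2,6, 3,7    // edges connecting the two faces
};

GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColor>(
    PrimitiveType.LineList,
    pointList,
    0,                 // vertex buffer offset
    8,                 // number of vertices used
    cubeEdgeIndices,   // 24 indices = 12 edges
    0,                 // first index element to read
    12);               // 12 line primitives, one per edge

For a solid cube drawn as a TriangleList, each of the 6 faces needs 2 triangles, so the index array grows to 36 entries and the primitive count becomes 12, with clockwise winding treated as front-facing under XNA's default cull mode.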


Delete point in trail Renderer

I was wondering if there would be a way to delete the last couple of positions/indexes of a trail renderer. I am trying to stop the trail renderer emission after a bullet collision, but the trail renderer always spawns a couple of indexes too many, and I was wondering if there was a way to delete those indexes.
I have tried setting the positions of newer indexes to equal older indexes, and that worked to some extent but not to the extent that I wanted.
tR = GetComponent<TrailRenderer>();
int positions = tR.positionCount;
for (int i = 0; i < vertsToDelete; i++) {
    if (positions - 1 - i - (int)vertsToDelete > 0) {
        tR.SetPosition(positions - 1 - i, tR.GetPosition(positions - 1 - i - (int)vertsToDelete));
    }
}
This code works mostly, except for certain instances when the positions screw up. That's why I think being able to delete an index would make the process much easier.
It is more efficient to use one single GetPositions call, manipulate the array, and write it back completely using one single AddPositions call.
In order to alter the array you could go through a list, which allows changing the number of elements dynamically more easily.
Start by converting the array to a list using LINQ's IEnumerable.ToList():
var positions = new Vector3[tR.positionCount];
tR.GetPositions(positions);
var positionsList = positions.ToList();
then remove an element by index using List<T>.RemoveAt(int index)
positionsList.RemoveAt(indexToRemove);
or remove multiple sequential elements using List<T>.RemoveRange(int startIndex, int amount)
positionsList.RemoveRange(startIndex, amountToRemove);
And finally convert it back to the required arrays using List<T>.ToArray()
tR.Clear();
tR.AddPositions(positionsList.ToArray());
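Putting those steps together, a minimal sketch (the method name, the vertsToDelete parameter and the assumption that the newest points sit at the end of the array, as in the question's code, are mine):

// Requires using UnityEngine; and using System.Linq;
void TrimTrail(TrailRenderer tR, int vertsToDelete)
{
    var positions = new Vector3[tR.positionCount];
    tR.GetPositions(positions);

    var positionsList = positions.ToList();
    int amountToRemove = Mathf.Min(vertsToDelete, positionsList.Count);
    // Drop the newest points, assumed to be at the end of the array.
    positionsList.RemoveRange(positionsList.Count - amountToRemove, amountToRemove);

    tR.Clear();
    tR.AddPositions(positionsList.ToArray());
}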
For anyone still searching for this.
I had an issue just like that: my Trail Renderer was drawing some artifacts when I used a Floating Origin with it (even when I turned emission off, it was like it had a delay in turning off).
After 2 weeks of trying many things and searching, I solved this in the most ridiculous way:
// I have this at the beginning of my class:
// public TrailRenderer trailRenderer;
// and this declared at Start:
// trailRenderer = trailRenderer.GetComponent<TrailRenderer>();

int trailSize = trailRenderer.positionCount;
float distance = Vector3.Distance(trailRenderer.GetPosition(trailSize - 1), trailRenderer.GetPosition(trailSize - 2));
Debug.Log("Distance: " + distance);

if (distance > 90f)
{
    trailRenderer.SetPosition(trailSize - 1, trailRenderer.transform.position);
}
This checks for the wrong draw (in my case it is just the last point) and snaps it back to my trail renderer's transform, preventing any artifacts my floating origin may cause, if it causes any. That's much better than using a for loop to verify every position, since I only have an issue with the last one. Just adapt that to your needs :)

Draw segments of a circle in XNA

How do I draw a circle sector (as in a slice-of-pizza shape) in XNA?
I'd like to use one as a timer indicator, so would like to be able change its angle dynamically.
In an ideal world I'm looking for something like this:
DrawSector(float startAngle, float endAngle, ...)
Does such a thing exist?
And if it does - how would I go about drawing a more graphically involved one (as opposed to just a block colour)?
No. XNA only provides an API for drawing primitive elements called, surprisingly, primitives.
All is not lost, because drawing a circle can be viewed as simply drawing a series of very short interconnected line segments: small enough that you can't tell they are lines, but not so small as to be inefficient.
In XNA you would draw a PrimitiveType.LineStrip.
MSDN:
The data is ordered as a sequence of line segments; each line segment is described by one new vertex and the last vertex from the previous line segment. The count may be any positive integer.
e.g. (from MSDN)
GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColor>(
    PrimitiveType.LineStrip,
    primitiveList,
    0,                // vertex buffer offset to add to each element of the index buffer
    8,                // number of vertices to draw
    lineStripIndices,
    0,                // first index element to read
    7                 // number of primitives to draw
);
You would need to create your own function to determine the vertices that match the arc you want to draw. You should save the result into a permanent index and vertex buffer rather than rebuilding it every frame in your game loop.
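For example, a minimal sketch of such a function (the name, the segment count and the XY plane are my own choices): it produces the outline of a pizza-slice sector as a LineStrip - the center, the points along the arc, then back to the center:

// Sketch: build LineStrip vertices for a circle sector (pizza-slice outline).
VertexPositionColor[] BuildSectorOutline(Vector3 center, float radius,
    float startAngle, float endAngle, int segments, Color color)
{
    // center + (segments + 1) arc points + center again to close the slice
    var vertices = new VertexPositionColor[segments + 3];
    vertices[0] = new VertexPositionColor(center, color);

    for (int i = 0; i <= segments; i++)
    {
        float angle = MathHelper.Lerp(startAngle, endAngle, (float)i / segments);
        Vector3 p = center + new Vector3(
            radius * (float)Math.Cos(angle),
            radius * (float)Math.Sin(angle),
            0f);
        vertices[i + 1] = new VertexPositionColor(p, color);
    }

    vertices[segments + 2] = vertices[0]; // close the outline back at the center
    return vertices;
}

Drawn as a LineStrip this is (segments + 2) primitives; for a filled (or textured) slice you would instead build a TriangleList fan from the center, giving one triangle per segment.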
For more detail, see the MSDN guide Drawing 3D Primitives using Lists or Strips.

How to avoid shimmering with rotated spriteBatch?

I am writing a camera module for my XNA project and I recently encountered a problem.
I allowed camera rotation in my module and got a fabulous bug: every time I draw multiple sprites in the same position, spriteBatch sometimes draws one in front, sometimes the other, and, even funnier, sometimes both sprites are shown with different alphas.
I've made a few experiments:
When I set the SpriteBatch mode to Deferred everything is OK - but I want to have access to the z-index.
When I draw the whole 500x500 tile array (all the sprites loaded), everything is OK, but when I take something like a 50x50 square from the array (containing the whole desired screen content), it gets bugged.
Finally, different states keep occurring for the same angle.
I must add that I do the translation myself, in order to get double precision. Here is the method for translations:
public void DrawSprite(Sprite toDraw)
{
    // TODO: Think about converting constructor
    Vector2D drawingPosition;
    Vector2 drawingPos;
    drawingPosition = toDraw.Position - Position;
    drawingPos.X = (float) drawingPosition.X * GraphicsManager.Instance.UnitToPixels;
    drawingPos.Y = (float) drawingPosition.Y * GraphicsManager.Instance.UnitToPixels;
    spriteBatch.Draw(toDraw.Texture, drawingPos, toDraw.Source, toDraw.Color,
        toDraw.Rotation, toDraw.Origin, toDraw.Scale, toDraw.Effects, toDraw.LayerDepth);
}
My ideas for the problem:
A) Fix the bug somehow (if it's possible)
B) Force XNA to sort first by z-index and then by drawing order.
C) Apply some z-indexes everywhere overlapping can occur (don't like this)
D) Abandon the z-index (don't want that either)
As lukegravitt suggested, you should use SpriteSortMode.FrontToBack:
Sprites are sorted by depth in front-to-back order prior to drawing.
This procedure is recommended when drawing opaque sprites of varying
depths.
Reference MSDN.
So you can easily set the z-index with the last parameter of SpriteBatch.Draw:
float layerDepth
The depth of a layer. By default, 0 represents the front layer and 1
represents a back layer. Use SpriteSortMode if you want sprites to be
sorted during drawing.
Reference MSDN.
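In practice that just means opening the batch with that sort mode, for example (a minimal sketch; the remaining Begin parameters are whatever you already use):

spriteBatch.Begin(SpriteSortMode.FrontToBack, BlendState.AlphaBlend);
// ... spriteBatch.Draw(...) calls, each with a meaningful layerDepth ...
spriteBatch.End();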
I finally picked option C); here is how it works in my code:
public void DrawSprite(Sprite toDraw)
{
    // TODO: Think about converting constructor
    Vector2D drawingPosition;
    Vector2 drawingPos;
    drawingPosition = toDraw.Position - Position;
    drawingPos.X = (float) drawingPosition.X * GraphicsManager.Instance.UnitToPixels;
    drawingPos.Y = (float) drawingPosition.Y * GraphicsManager.Instance.UnitToPixels;
    // proceeding to a new z-index
    zsortingValue += 0.000001f;
    spriteBatch.Draw(toDraw.Texture, drawingPos, toDraw.Source, toDraw.Color,
        toDraw.Rotation, toDraw.Origin, toDraw.Scale, toDraw.Effects, toDraw.LayerDepth + zsortingValue);
}
where zsortingValue is reset to zero whenever a frame begins. This way each sprite gets its own sorting value, which only ever increases within the frame.

How do I draw a TriangleStrip / Filling it in XNA?

I'm trying to wrap my head around:
http://msdn.microsoft.com/en-us/library/bb196409.aspx
And the reference isn't much to go on. It's short, vague and nothing that you can learn from.
I want to create a method that takes a list of Triangle (a class of 3 vectors), renders it, and later lets me fill it with a color or a texture.
Can someone explain the above-mentioned method? Because what I'm trying simply isn't working. I've tried adding one triangle. My understanding is below; please correct me where I'm wrong.
Method when creating "One Triangle":
GraphicsDevice.DrawUserIndexedPrimitives<VertexPositionColor>
(
    PrimitiveType.TriangleStrip,
    "array of 3 VertexPositionColor?",
    0,
    (3? 9?),
    "I have no clue what to put here and why I should put it here?",
    "What do I put here?",
    "If 1 triangle, value should be = 1? 1 Triangle = 1 Primitive"
);
What do I need to make this work? And which values do I change depending on how many Triangles I pass to my method?
...and if successful (hopefully sometime) how do I fill it?
Please, no vague short answers because the reference does that very very well.
One clarification to your way of thinking before we begin. In XNA you draw a wireframe (outline) triangle, or a filled triangle, or a textured triangle. There isn't anything like "draw now" and "fill later". You can only draw something else on top of what's already in the framebuffer.
Also, here is some background on what an indexed mesh is. This is the data fed into DrawUserIndexedPrimitives (vertices, and triangles composed of indices into the set of vertices).
Given that, here's how the draw call works:
_effect.Texture = texture; // This sets the texture up so the
                           // shaders associated with this effect can access it.
                           // The color in each vertex is modulated with the texture color
                           // and linearly interpolated across vertices.
_effect.VertexColorEnabled = true;

foreach (var pass in _effect.CurrentTechnique.Passes)
{
    pass.Apply(); // This sets up the shaders and their state

    // TriangleList means that the indices are understood to be
    // multiples of 3, where the 3 vertices pointed to comprise
    // one triangle
    _device.DrawUserIndexedPrimitives(PrimitiveType.TriangleList,
        // The vertices. Note that there can be any number of vertices in here.
        // What's important is the indices array (and the vertexOffset, vertexCount, primitiveCount)
        // that determine how many of the provided vertices actually matter for this draw call
        _vertices,
        // The offset to the first vertex that index 0 in the index array will refer to.
        // This is used to render a "part" of a bigger set of vertices, perhaps shared across
        // different objects
        0,
        // The number of vertices to pick starting from vertexOffset. If the index array
        // tries to index a vertex out of this range, the draw call will fail.
        _vertices.Length,
        // The indices (count = multiple of 3) that comprise separate triangles (because we said TriangleList -
        // the rules are different depending on the primitive type)
        _indices,
        // Again, an offset inside the indices array so a part of a larger index array can be used
        0,
        // Number of primitives to draw. Because we're rendering a list of triangles (TriangleList),
        // this is the number of indices divided by 3.
        _indices.Length / 3);
}
I hope that is clear. Do let me know if you have any specific questions about each of the parameters and/or concepts and I can edit this post to clarify those points.
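To make the "one triangle" case concrete, here is a minimal sketch of what _vertices, _indices and _effect could look like (the field names follow the snippet above; the positions and colours are just placeholders):

// Sketch: data for a single filled triangle, matching the draw call above.
BasicEffect _effect = new BasicEffect(GraphicsDevice);
VertexPositionColor[] _vertices = new VertexPositionColor[]
{
    new VertexPositionColor(new Vector3( 0f,  1f, 0f), Color.Red),
    new VertexPositionColor(new Vector3( 1f, -1f, 0f), Color.Green),
    new VertexPositionColor(new Vector3(-1f, -1f, 0f), Color.Blue)
};
short[] _indices = new short[] { 0, 1, 2 };   // 3 indices = 1 triangle = 1 primitive

To actually see it you also need to set _effect.World, View and Projection (or leave them at identity and keep the positions roughly in the -1..1 range).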
Hope this helps!

Is it possible to use a shader to find the "difference" between two textures? (XNA/HLSL)

I have made a simple webcam-based application that detects the "edges of motion", so it draws a texture that shows where the pixels of the current frame are significantly different from the previous frame. This is my code:
// LastTexture is a Texture2D of the previous frame.
// CurrentTexture is a Texture2D of the current frame.
// DifferenceTexture is another Texture2D.
// Variance is an int, default 100;
Color[] differenceData = new Color[CurrentTexture.Width * CurrentTexture.Height];
Color[] currentData = new Color[CurrentTexture.Width * CurrentTexture.Height];
Color[] lastData = new Color[LastTexture.Width * LastTexture.Height];
CurrentTexture.GetData<Color>(currentData);
LastTexture.GetData<Color>(lastData);
for (int i = 0; i < currentData.Length; i++)
{
    int sumCD = ColorSum(currentData[i]); // ColorSum is the same as c.R + c.B + c.G where c is a Color.
    int sumLD = ColorSum(lastData[i]);
    if ((sumCD > sumLD - Variance) && (sumCD < sumLD + Variance))
        differenceData[i] = new Color(0, 0, 0, 0); // If the current pixel is within +/- the Variance (default: 100) of the previous one, it has not significantly changed, so it is drawn black.
    else
        differenceData[i] = new Color(0, (byte)Math.Abs(sumCD - sumLD), 0); // This has changed significantly, so it is drawn a shade of green.
}
DifferenceTexture = new Texture2D(game1.GraphicsDevice, CurrentTexture.Width, CurrentTexture.Height);
DifferenceTexture.SetData<Color>(differenceData);
LastTexture = new Texture2D(game1.GraphicsDevice,CurrentTexture.Width, CurrentTexture.Height);
LastTexture.SetData<Color>(currentData);
Is there a way to offload this calculation to the GPU using shaders (it runs at about 25/26 fps using the above method, which is a bit slow)? I have a basic understanding of how HLSL shaders work and don't expect a full solution; I just want to know if this would be possible, how to get the "difference" texture data back from the shader, and whether this would actually be any faster.
Thanks in advance.
You could sample the two textures inside the pixel shader, then write the difference out as the colour value. If you set up a Render Target, the colour information you output from the shader will be stored in that texture instead of the framebuffer.
I don't know what sort of speed gain you'd expect to see, but that's how I'd do it.
*edit - Oh, and I forgot to say: be aware of the sampling type you use, as it will affect the results. If you want your algorithm to translate directly to the GPU, use point sampling to start with.
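On the XNA side, the setup could look roughly like this (a sketch only: differenceEffect and its LastFrame parameter are hypothetical names for your own pixel shader, not an existing XNA effect):

// Render the difference into a texture instead of the backbuffer.
RenderTarget2D differenceTarget = new RenderTarget2D(
    GraphicsDevice, CurrentTexture.Width, CurrentTexture.Height);

GraphicsDevice.SetRenderTarget(differenceTarget);
GraphicsDevice.Clear(Color.Black);

// The custom effect's pixel shader samples both frames and outputs the difference.
differenceEffect.Parameters["LastFrame"].SetValue(LastTexture);

spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque,
    SamplerState.PointClamp, null, null, differenceEffect);
spriteBatch.Draw(CurrentTexture, Vector2.Zero, Color.White);
spriteBatch.End();

GraphicsDevice.SetRenderTarget(null);
// differenceTarget can now be drawn or read back like any other Texture2D.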
Regarding your comment above about deciding to use a dual-thread approach to your problem, check out the .NET Parallel Extensions CTP from Microsoft. microsoft.com
If you're not planning on deploying to an Xbox 360, this library works great with XNA, and I've seen massive speed improvements in certain loops and iterations.
You would basically only have to change a couple lines of code, for example:
for (int i = 0; i < currentData.Length; i++)
{
// ...
}
would change to:
Parallel.For(0, currentData.Length, delegate(int i)
{
// ...
});
to automatically make each core in your processor help out with the number crunching. It's fast and excellent.
Good luck!
The Sobel operator or something like it is used in game engines and other real-time applications for edge detection. It is trivial to write as a pixel shader.
