Can Short2 be used on WP7 for vertex positions? - c#

I'm having trouble using Short2 for the (x,y) positions in my vertex data. This is my vertex structure:
struct VertexPositionShort : IVertexType
{
    private static VertexElement[] vertexElements = new VertexElement[]
    {
        new VertexElement(0, VertexElementFormat.Short2, VertexElementUsage.Position, 0),
    };

    private static VertexDeclaration vertexDeclaration = new VertexDeclaration(vertexElements);

    public Short2 Position;

    public static VertexDeclaration Declaration
    {
        // Return the cached declaration rather than allocating a new one per call
        get { return vertexDeclaration; }
    }

    VertexDeclaration IVertexType.VertexDeclaration
    {
        get { return vertexDeclaration; }
    }
}
Using the WP7 emulator, nothing is drawn if I use this structure - no artifacts, nothing! However, if I use an identical structure where the Short2 structs are replaced by Vector2 then it all works perfectly.
I've found a reference to this being an emulator-specific issue: "In the Windows Phone Emulator, the SkinnedEffect bone index channel must be specified as one of the integer vertex element formats - either Byte4, Short2, or Short4. This same set of integer data formats cannot be used for other shader input channels such as colors, positions, and texture coordinates on the emulator." (http://www.softpedia.com/progChangelog/Windows-Phone-Developer-Tools-Changelog-154611.html) However this is from July 2010 and I'd have assumed this limitation has been fixed by now...? Unfortunately I don't have a device to test on.
Can anyone confirm that this is still an issue in the emulator or point me at another reason why this is not working?

Solved, by Mr Shawn Hargreaves: "You can use Short2 in vertex data, but this is an integer type, so your vertex shader must be written to accept integer rather than float inputs. BasicEffect takes floats, so Short2 will not work with it. NormalizedShort2 might be a better choice?"
http://blogs.msdn.com/b/shawnhar/archive/2010/11/19/compressed-vertex-data.aspx
I can confirm that NormalizedShort2 does in fact work for position data, in both the WP7 emulator and on real devices.
Thanks, Shawn!
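The reason NormalizedShort2 works where Short2 doesn't: both store two 16-bit integers, but a normalized format is divided by 32767 on fetch, so the vertex shader receives floats in [-1, 1], which is what a float-input effect like BasicEffect expects. A minimal sketch of that quantization (Python, purely to illustrate the arithmetic; the helper names are made up):

```python
def pack_normalized_short(value):
    """Quantize a float in [-1, 1] to the signed 16-bit storage that
    NormalizedShort2 uses per component (hypothetical helper)."""
    return max(-32767, min(32767, round(value * 32767)))

def unpack_normalized_short(raw):
    """What the GPU does on fetch: scale back to a float in [-1, 1]."""
    return raw / 32767.0
```

A component of 0.5 round-trips with an error of about 1/32767, which is usually invisible for positions once the world transform is applied.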

Related

Access TextMesh Pro Texture Tiling and Offset how?

TextMesh Pro shaders have two unusual facilities for adjusting the Textures used for both the Face and the Outline: Tiling and Offset.
They're not accessible via the normal ways of using keywords to access shader properties in Unity.
Is it possible to access these properties from Monobehaviours? How?
If you're looking for sample code, there's little point: I've tried all the normal ways of accessing shader properties in Unity and found none of them work, all throwing errors about symbols not existing, or returning nulls.
These properties are somewhat nested, somehow.
If you've ever successfully edited these values with a Script in Unity, you'll likely know that they're a bit different.
Within the Shader files for TextMesh Pro, these values are known as:
float4 _FaceTex_ST;
float4 _OutlineTex_ST;
Note: the .x and .y of these float4s are the scaling/tiling along their respective axes, whilst .z and .w are used for their offsets.
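In shader terms, a _ST vector is applied to a UV as uv * st.xy + st.zw (this is what Unity's TRANSFORM_TEX macro expands to). A tiny sketch of that arithmetic (Python, just for illustration):

```python
def transform_tex(uv, st):
    """Apply a Unity-style _ST vector to a UV coordinate:
    (x, y) hold the tiling, (z, w) hold the offset."""
    u, v = uv
    tx, ty, ox, oy = st
    return (u * tx + ox, v * ty + oy)

# The default _ST of (1, 1, 0, 0) leaves the UV unchanged.
```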
Depending a bit on which shader exactly you use (for now assuming one of the built-in ones, e.g. TextMeshPro/Distance Field (Surface)), you can search for the shader, e.g. in Assets/TextMeshPro/Shaders, select it, and see which properties are exposed in the Inspector.
In that case it would be the _FaceTex texture.
Now the Tiling and Offset are indeed quite tricky, since they are stored directly along with the Texture property itself! You can see this when setting the Inspector to Debug mode with the TextMeshPro selected.
=> You want to use Material.SetTextureOffset and Material.SetTextureScale (= Tiling) with the property name of the texture itself:
yourTextMeshPro.fontMaterial.SetTextureScale("_FaceTex", new Vector2(42, 42));
yourTextMeshPro.fontMaterial.SetTextureOffset("_FaceTex", new Vector2(123, 456));
The Tiling and Offset apparently have no effect for the Outline. See e.g. the Distance Field (Surface) shader:
Outline
...
Tiling: ????
Offset: ????
You could still try, though, and do the same just with _OutlineTex.
Thanks to the incomparable derHugo, the resultant code works perfectly, in both directions:
using TMPro;
using UnityEngine;

public class TestMaterialProps : MonoBehaviour
{
    public Vector2 FaceTiling;
    public Vector2 FaceOffset;
    public Vector2 OutlineTiling;
    public Vector2 OutlineOffset;
    public Material myFontMaterial;

    TextMeshPro myTextMeshPro;

    static readonly int FaceTex = Shader.PropertyToID("_FaceTex");
    static readonly int OutlineTex = Shader.PropertyToID("_OutlineTex");

    void Start()
    {
        myTextMeshPro = GetComponent<TextMeshPro>();
        myFontMaterial = myTextMeshPro.fontSharedMaterial;
        FaceTiling = myFontMaterial.GetTextureScale(FaceTex);
        FaceOffset = myFontMaterial.GetTextureOffset(FaceTex);
        OutlineTiling = myFontMaterial.GetTextureScale(OutlineTex);
        OutlineOffset = myFontMaterial.GetTextureOffset(OutlineTex);
    }

    void OnValidate()
    {
        myFontMaterial.SetTextureScale(FaceTex, FaceTiling);
        myFontMaterial.SetTextureOffset(FaceTex, FaceOffset);
        myFontMaterial.SetTextureScale(OutlineTex, OutlineTiling);
        myFontMaterial.SetTextureOffset(OutlineTex, OutlineOffset);
    }
}
This makes it possible to copy text styles accurately from one text object to another, since a bug in the copy/paste of Unity properties prevents these values from being copyable through the Editor UI...

WorldToScreenPoint but still different positions

So I have started using WorldToScreenPoint, but the thing is, the object is not in the same place on different screen sizes :(
Here's my code:
public virtual void Move()
{
    Vector2 buttonFirst = thisCam.WorldToScreenPoint(gameButtons[0].transform.position);
    buttonFirst.x = 316.5f; //242
    buttonFirst.y = 111f;
    gameButtons[0].transform.position = buttonFirst;
}
Here's the output: the object is not in the same place on different screen sizes.
Of course: different size, so different position.
If you need the returned position to be invariant across screen sizes, use WorldToViewportPoint; it always returns a value between (0,0) and (1,1).
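The viewport conversion is just a division by the screen dimensions, which is exactly why it is resolution-independent while raw screen points are not; a minimal sketch of the arithmetic (Python, hypothetical values):

```python
def world_to_viewport(screen_point, screen_size):
    """Screen-space pixels -> viewport space in [0, 1],
    which is what makes the value resolution-independent."""
    x, y = screen_point
    w, h = screen_size
    return (x / w, y / h)
```

The centre of a 1280x720 screen and the centre of a 1920x1080 screen both map to (0.5, 0.5), which is why hard-coded pixel values drift between devices.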
As I understood it, you need to use ScreenToWorldPoint; check this:
https://docs.unity3d.com/ScriptReference/Camera.ScreenToWorldPoint.html

How exactly do we bind an attribute name to a location in OpenGL?

I am using OpenTK, a wrapper for .NET. The version of OpenGL used is 4.5 implemented by NVIDIA. I am using Windows 10 Pro.
Issue
My issue is simple. I want to address the vertex attributes by their names, instead of hard coding their location in shader source.
I have a vertex shader called basicTexturedVert.glsl
#version 450 core

in vec4 position;
in vec2 normal;

uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

out vec2 vs_uv;

void main(void)
{
    vs_uv = normal;
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * position;
}
Things I have tried
Now to do that, normally I would call GL.GetAttribLocation with the name of the attribute in the program, and it would return its location. Well, I tried everything, but it only returns the location of the in vec4 position and not of the in vec2 normal. And by everything I mean:
When I hard-code the location of both attributes, GL.GetAttribLocation for "position" always returns the correct location, but the same call for "normal" returns -1.
I thought it had to do with the name normal, maybe it is a reserved word in OpenGL, so I changed it to a random word like abcdef; still the same result.
Then I thought maybe it has to do with the order of declaration of the attributes in the shader source, so I moved normal before position; still the same results.
By now I am going insane trying to figure out why OpenGL always gives the right location for position only. I thought maybe vec2 (which here is the only differentiator between the two) is not an accepted type; I checked online, and it most certainly is accepted.
As you can see I tried many things before trying this one. I read that you can programmatically bind the attributes to names and assign a location to choose. So that is what I do here in the following code.
First I create my Shader objects like this:
var basicTexturedVertexShader = new Shader("Basic Textured Vertex Shader",
    ShaderType.VertexShader,
    File.ReadAllText(@"Components\Shaders\Vertex\basicTexturedVert.glsl"),
    new[] { "position", "normal" }
);

var basicTexturedFragmentShader = new Shader("Basic Textured Fragment Shader",
    ShaderType.FragmentShader,
    File.ReadAllText(@"Components\Shaders\Fragment\basicTexturedFrag.glsl")
);
As you can see, each shader gets assigned:
- A name so I can understand which shader I am working on (during debug)
- The type of the shader (VertexShader or FragmentShader)
- The shader source code
- And optionally an array containing the names of the shader attributes like for the first one new[] { "position", "normal" } which will be assigned to a location during program linking
I then create a program and link them to it:
_texturedProgram = new ShaderProgram(basicTexturedVertexShader, basicTexturedFragmentShader);
_texturedProgram.Link();
Now inside the _texturedProgram.Link:
int location = 0; // This is a location index that starts from 0 then goes up
foreach (var shader in _shaders)
{
    DebugUtil.Info($"Attaching shader {shader.Name} of handle {shader.Handle} of type {shader.Type} to program {_handle}");
    GL.AttachShader(_handle, shader.Handle);

    // If the shader we attached has attribute names with it,
    // it means we need to give them a location
    if (shader.AttributeNames != null)
    {
        foreach (var shaderAttributeName in shader.AttributeNames)
        {
            _attributeLocation[shaderAttributeName] = location;
            GL.BindAttribLocation(_handle, location, shaderAttributeName);

            // We check if anything wrong happened and output it
            ErrorCode error;
            bool errorHappened = false;
            while ((error = GL.GetError()) != ErrorCode.NoError)
            {
                DebugUtil.Warning($"Problem during binding attrib location of {shaderAttributeName} of {shader.Name} to {location} in program {_handle}. Error: {error}");
                errorHappened = true;
            }
            if (!errorHappened)
            {
                DebugUtil.Info($"Shader attribute \"{shaderAttributeName}\" of {shader.Name} of program {Handle} SHOULD HAVE BEEN bound to location {location}");
            }
            location++;
        }
    }
}

// We link the program
GL.LinkProgram(_handle);

// Make sure the linking happened with no problem
var info = GL.GetProgramInfoLog(_handle);
if (!string.IsNullOrWhiteSpace(info))
{
    DebugUtil.Warning($"Info log during linking of shaders to program {_handle}: {info}");
}
else
{
    DebugUtil.Info($"Program {_handle} linked successfully");
}

// We compare the locations we think have been assigned to the vertex attributes
// to the ones that are actually stored in OpenGL
foreach (var attribute in _attributeLocation)
{
    DebugUtil.Info($"[Program:{_handle}] According to OpenGL, {attribute.Key} is located in {GL.GetAttribLocation(_handle, attribute.Key)} when it is supposed to be in {attribute.Value}");
}

// We clean up :)
foreach (var shader in _shaders)
{
    GL.DetachShader(_handle, shader.Handle);
    GL.DeleteShader(shader.Handle);
}

// No need for the shaders anymore
_shaders.Clear();
And here is the console output:
Let's say that position's default location would have been 0 and it's just a coincidence; let's set the location starting index at, say, 5.
As you can see, my code works for position but not for normal...
It appears that, because the normal vertex attribute is never used by the subsequent stage (the fragment shader), OpenGL optimizes the shader program by getting rid of unused variables, so the attribute is not active and GetAttribLocation returns -1.
Thanks to @Ripi2 for pointing that out.

XNA Hardware Instancing: Mesh not rendered completely

I have implemented basic Hardware model instancing method in XNA code by following this short tutorial:
http://www.float4x4.net/index.php/2011/07/hardware-instancing-for-pc-in-xna-4-with-textures/
I have created the needed shader (without texture atlas though, single texture only) and I am trying to use this method to draw a simple tree I generated using 3DS Max 2013 and exported via FBX format.
The results I'm seeing left me without clue as to what is going on.
Back when I was using no instancing methods, but simply calling Draw on a mesh (for every tree on a level), the whole tree was shown:
I have made absolutely sure that the Model contains only one Mesh and that Mesh contains only one MeshPart.
I am using Vertex Extraction method, by using Model's Vertex and Index Buffer "GetData<>()" method, and correct number of vertices and indices, hence, correct number of primitives is rendered. Correct texture coordinates and Normals for lighting are also extracted, as is visible by the part of the tree that is being rendered.
The parts of the tree that do render are in their correct places as well.
They are simply missing some 1000 or so polygons for absolutely no reason whatsoever. I have breakpointed at every step of vertex extraction and shader parameter generation, and I cannot for the life of me figure out what I am doing wrong.
My Shader's Vertex Transformation function:
VertexShaderOutput VertexShaderFunction2(VertexShaderInput IN, float4x4 instanceTransform : TEXCOORD1)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(IN.Position, transpose(instanceTransform));
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    output.texCoord = IN.texCoord;
    output.Normal = IN.Normal;
    return output;
}
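About the transpose in the first line: in HLSL, mul(v, M) treats v as a row vector, so mul(v, transpose(M)) is equivalent to mul(M, v) with v as a column vector; the transpose compensates for the majority in which the instance matrix arrives through the TEXCOORD stream. A small sketch of that identity (Python with plain nested lists, for illustration only):

```python
def mat_vec(m, v):
    """mul(M, v): treat v as a column vector (dot each row of M with v)."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def vec_mat(v, m):
    """mul(v, M): treat v as a row vector (dot v with each column of M)."""
    return [sum(v[r] * m[r][c] for r in range(4)) for c in range(4)]

def transpose(m):
    return [[m[c][r] for c in range(4)] for r in range(4)]

# mul(v, transpose(M)) gives the same result as mul(M, v):
M = [[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12], [13, 14, 15, 16]]
v = [1, 0, 2, 1]
assert vec_mat(v, transpose(M)) == mat_vec(M, v)
```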
Vertex bindings and index buffer generation:
instanceBuffer = new VertexBuffer(Game1.graphics.GraphicsDevice, Core.VertexData.InstanceVertex.vertexDeclaration, counter, BufferUsage.WriteOnly);
instanceVertices = new Core.VertexData.InstanceVertex[counter];
for (int i = 0; i < counter; i++)
{
    instanceVertices[i] = new Core.VertexData.InstanceVertex(locations[i]);
}
instanceBuffer.SetData(instanceVertices);

bufferBinding[0] = new VertexBufferBinding(vBuffer, 0, 0);
bufferBinding[1] = new VertexBufferBinding(instanceBuffer, 0, 1);
Vertex extraction method used to get all vertex info (this part I'm sure works correctly as I have used it before to load test geometric shapes into levels, like boxes, spheres, etc for testing various shaders, and constructing bounding boxes around them using extracted vertex data, and it is all correct):
public void getVertexData(ModelMeshPart part)
{
    modelVertices = new VertexPositionNormalTexture[part.NumVertices];
    rawData = new Vector3[modelVertices.Length];
    modelIndices32 = new uint[rawData.Length];
    modelIndices16 = new ushort[rawData.Length];
    int stride = part.VertexBuffer.VertexDeclaration.VertexStride;
    VertexPositionNormalTexture[] vertexData = new VertexPositionNormalTexture[part.NumVertices];
    part.VertexBuffer.GetData(part.VertexOffset * stride, vertexData, 0, part.NumVertices, stride);
    if (part.IndexBuffer.IndexElementSize == IndexElementSize.ThirtyTwoBits)
        part.IndexBuffer.GetData<uint>(modelIndices32);
    if (part.IndexBuffer.IndexElementSize == IndexElementSize.SixteenBits)
        part.IndexBuffer.GetData<ushort>(modelIndices16);
    for (int i = 0; i < modelVertices.Length; i++)
    {
        rawData[i] = vertexData[i].Position;
        modelVertices[i].Position = rawData[i];
        modelVertices[i].TextureCoordinate = vertexData[i].TextureCoordinate;
        modelVertices[i].Normal = vertexData[i].Normal;
        counter++;
    }
}
This is the rendering code for the object batch (trees in this particular case):
public void RenderHW()
{
    Game1.graphics.GraphicsDevice.RasterizerState = rState;
    treeBatchShader.CurrentTechnique.Passes[0].Apply();
    Game1.graphics.GraphicsDevice.SetVertexBuffers(bufferBinding);
    Game1.graphics.GraphicsDevice.Indices = iBuffer;
    Game1.graphics.GraphicsDevice.DrawInstancedPrimitives(PrimitiveType.TriangleList, 0, 0, treeMesh.Length, 0, primitive, counter);
    Game1.graphics.GraphicsDevice.RasterizerState = rState2;
}
If anybody has any idea where to even start looking for errors, just post all ideas that come to mind, as I'm completely stumped as to what's going on.
This even counters all my previous experience: whenever I'd mess something up in shader code or vertex generation, I'd get an absolute mess on screen, with numerous graphical artifacts such as elongated triangles originating where the mesh should be but with one tip stretching back to (0,0,0), black textures, incorrect positioning (often outside the skybox or below the terrain), incorrect scaling...
This is something different, almost as if it works: the part of the tree that is visible is correct in every single aspect (location, rotation, scale, texture, shading), except that a part is missing. What makes it weirder for me is that the missing part is seemingly logically segmented: only the tree trunk's primitives, and some leaves off the lowest branches, are missing, leaving all other primitives correctly rendered with no artifacts. Basically, they're... correctly missing.
Solved. Of course it was the one part I was 100% sure was correct when it was not.
modelIndices32 = new uint[rawData.Length];
modelIndices16 = new ushort[rawData.Length];
Change that into:
modelIndices32 = new uint[part.IndexBuffer.IndexCount];
modelIndices16 = new ushort[part.IndexBuffer.IndexCount];
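The reason this matters: an index buffer normally has far more entries than there are vertices, because each vertex is shared by several triangles, so sizing the index array by the vertex count silently drops the tail of the index list, i.e. whole triangles. A toy illustration of the mistake (Python, made-up numbers):

```python
# A quad: 4 vertices shared by 2 triangles -> 6 indices.
vertices = ["v0", "v1", "v2", "v3"]
indices = [0, 1, 2, 2, 1, 3]

# The bug: sizing the index array by the vertex count...
truncated = indices[:len(vertices)]  # keeps only 4 of the 6 indices

# ...which silently loses whole primitives:
lost_triangles = len(indices) // 3 - len(truncated) // 3

# The fix: size the array by the index buffer's own element count.
correct = indices[:len(indices)]
```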
Now I just have to figure out why 3 draw calls rendering 300 trees are slower than 300 draw calls rendering 1 tree each (i.e. why I wasted an entire afternoon creating a new problem).

Unity Compute Shaders Vertex Index error

I have a compute shader, and the C# script which goes with it, used to modify an array of vertices on the y axis; simple enough to be clear.
But despite the fact that it runs fine, the shader seems to forget the first vertex of my shape (except when that shape is a closed volume?).
Here is the C# class :
Mesh m;
//public bool stopProcess = false; //Useless in this version of the example
MeshCollider coll;
public ComputeShader csFile; //the compute shader file added the Unity way
Vector3[] arrayToProcess; //An array of vectors I'll use to store data
ComputeBuffer cbf; //the buffer CPU->GPU (An early version with exactly
                   //the same result had only this one)
ComputeBuffer cbfOut; //the buffer GPU->CPU
int vertexLength;

void Awake() { //Assigning my stuff
    coll = gameObject.GetComponent<MeshCollider>();
    m = GetComponent<MeshFilter>().sharedMesh;
    vertexLength = m.vertices.Length;
    arrayToProcess = m.vertices; //setting the first version of the vertex array (copy of mesh)
}

void Start() {
    cbf = new ComputeBuffer(vertexLength, 32); //Buffer in
    cbfOut = new ComputeBuffer(vertexLength, 32); //Buffer out
    csFile.SetBuffer(0, "Board", cbf);
    csFile.SetBuffer(0, "BoardOut", cbfOut);
}

void Update() {
    csFile.SetFloat("time", Time.time);
    cbf.SetData(m.vertices);
    csFile.Dispatch(0, vertexLength, vertexLength, 1); //Dispatching (I think my mistake is here)
    cbfOut.GetData(arrayToProcess); //getting back my processed vertices
    m.vertices = arrayToProcess; //assigning them to the mesh
    //coll.sharedMesh = m; //collider stuff, useless in this demo
}
And my compute shader script :
#pragma kernel CSMain

RWStructuredBuffer<float3> Board : register(s[0]);
RWStructuredBuffer<float3> BoardOut : register(s[1]);

float time;

[numthreads(1,1,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    float valx = (sin((time*4)+Board[id.x].x));
    float valz = (cos((time*2)+Board[id.x].z));
    Board[id.x].y = (valx + valz)/5;
    BoardOut[id.x] = Board[id.x];
}
At the beginning I was reading and writing from the same buffer, but when I hit my issue I tried having separate buffers, with no success; I still have the same problem.
Maybe I misunderstood the way compute shaders are supposed to be used (I know I could use a vertex shader, but I just want to try compute shaders for further improvements).
To complete what I said, I suppose it is related to the way vertices are indexed in the Mesh.vertices array.
I tried a LOT of different blocks/threads configurations, but nothing seems to solve the issue. Combinations tried:
Block Thread
60,60,1 1,1,1
1,1,1 60,60,3
10,10,3 3,1,1
and some others I do not remember. I think the best configuration should be something with a good balance, like:
Block: VertexCount,1,1  Thread: 3,1,1
About the closed volume: I'm not sure about that, because with a cube (8 vertices) everything seems to move accordingly, but with a shape with an odd number of vertices, the first (or last, I haven't checked which yet) seems not to be processed.
I tried it with many different shapes, but subdivided planes are the most obvious: one corner is always not moving.
EDIT :
After further study I found out that it is simply the compute shader which does not compute the last (not the first, I checked) vertices of the mesh. It seems related to the buffer type; I still don't get why RWStructuredBuffer should be an issue or how badly I'm using it. Is it reserved to streams? I can't understand the MSDN doc on this one.
EDIT : After resolution
The C# script:
using UnityEngine;
using System.Collections;

public class TreeObject : MonoBehaviour {

    Mesh m;
    public bool stopProcess = false;
    MeshCollider coll;
    public ComputeShader csFile;
    Vector3[] arrayToProcess;
    ComputeBuffer cbf;
    ComputeBuffer cbfOut;
    int vertexLength;

    // Use this for initialization
    void Awake() {
        coll = gameObject.GetComponent<MeshCollider>();
        m = GetComponent<MeshFilter>().mesh;
        vertexLength = m.vertices.Length + 3; //I add 3 because apparently
                                              //the vertex number is odd
        //arrayToProcess = new Vector3[vertexLength];
        arrayToProcess = m.vertices;
    }

    void Start() {
        cbf = new ComputeBuffer(vertexLength, 12);
        cbfOut = new ComputeBuffer(vertexLength, 12);
        csFile.SetBuffer(0, "Board", cbf);
        csFile.SetBuffer(0, "BoardOut", cbfOut);
    }

    // Update is called once per frame
    void Update() {
        csFile.SetFloat("time", Time.time);
        cbf.SetData(m.vertices);
        csFile.Dispatch(0, vertexLength, 1, 1);
        cbfOut.GetData(arrayToProcess);
        m.vertices = arrayToProcess;
        coll.sharedMesh = m;
    }
}
I had already rolled back to
Blocks: VCount,1,1
before your answer, because it was logical that with VCount*VCount I was processing the vertices "square-more" times than needed.
To complete: you were absolutely right, the stride was obviously causing issues. Could you complete your answer with a link to docs about the stride parameter? (From anywhere, because the Unity docs are void and MSDN didn't help me get why it should be 12 and not 32, as I thought 32 was the size of a float3.)
So: doc needed, please.
In the meantime I'll try to provide a flexible enough (generic?) version of this to make it stronger, and start adding some nice array-processing functions in my shader...
I'm familiar with Compute Shaders but have never touched Unity, but having looked over the documentation for Compute Shaders in Unity a couple of things stand out.
The cbf and cbfOut ComputeBuffers are created with a stride of 32 (bytes?). Both your StructuredBuffers contain float3s which have a stride of 12 bytes, not 32. Where has 32 come from?
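To spell out the stride arithmetic (a quick Python check; struct format "fff" is three 32-bit floats, matching a float3):

```python
import struct

# A float3 is three 4-byte floats: 3 * 4 = 12 bytes per element.
FLOAT3_STRIDE = struct.calcsize("fff")

# With a stride of 32, the GPU would step 32 bytes per element and run
# out of data well before the last vertex:
vertex_count = 100
buffer_bytes = vertex_count * FLOAT3_STRIDE
elements_covered_with_wrong_stride = buffer_bytes // 32
```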
When you dispatch your compute shader you're requesting a two-dimensional dispatch (vertexLength, vertexLength, 1), but you're operating on a 1D array of float3s. You will end up with a race condition where many different threads think they're responsible for updating each element of the array. Although awful for performance, if you want a thread group size of [numthreads(1,1,1)] then you should dispatch (vertexLength, 1, 1) waves/wavefronts when calling Dispatch (i.e. Dispatch(60, 1, 1) with numthreads(1,1,1)).
For best/better performance the number of threads in your thread group / wave should at least be a multiple of 64 for best efficiency on AMD hardware. You then need only dispatch ceil(numVertices/64) wavefronts and then simply insert some logic into the shader to ensure id.x is not out of bounds for any given thread.
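That wavefront count is just a ceiling division plus a bounds check in the shader; a sketch of the host-side arithmetic (Python, with the 64 threads per group suggested above):

```python
THREADS_PER_GROUP = 64  # a multiple of the AMD wavefront size

def dispatch_groups(num_vertices):
    """Thread groups needed so that every vertex gets a thread.
    The shader must still test id.x < num_vertices, since the last
    group usually has spare threads."""
    return (num_vertices + THREADS_PER_GROUP - 1) // THREADS_PER_GROUP
```

For example, 60 vertices fit in a single group of 64, while 65 need two.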
EDIT:
The documentation for the ComputeBuffer constructor is here: Unity ComputeBuffer Documentation
While it doesn't explicitly say "stride" is in bytes, it's the only reasonable assumption.
