I want to create a second UV set, and then move each UV vertex in an object by the vector u = 0, v = 1.0 / number of vertices. The new UV coordinates for a 4-vertex plane should go like this: vertex 0 (u=0, v=0), vertex 1 (u=0, v=0.25), vertex 2 (u=0, v=0.5), vertex 3 (u=0, v=0.75), and so on.
I have source code in C#:
Vector2[] UV2 = new Vector2[m.vertexCount];
float HalfTexelSize = (1f/ (float)m.vertexCount)/2f;
for (int i = 0; i < m.vertexCount; i++) {
    UV2[i] = new Vector2(0f, (float)i / (float)m.vertexCount) + new Vector2(0f, HalfTexelSize);
}
m.uv2 = UV2;
meshFilter.mesh = m;
As far as my research goes, there are no vectors in Python, and now I'm stuck. So far I came up with this:
import maya.cmds as cmds
cmds.polyUVSet(create=True, uvSet='map2')
vertexCount = cmds.polyEvaluate(v=True)
vertexCount_float = float(vertexCount)
HalfTextureSize = (1.0/vertexCount/2.0)
x = 1.0/vertexCount
sel = cmds.ls(sl=1, fl=1)
for i in sel:
i=0, i<sel
cmds.polyEditUV(uValue=0.0, vValue=x)
But the output I get is a second UV set with every vertex at (0,0). Can anyone help me? Any MEL/Python solution would be appreciated.
There are no vectors in default Maya Python.
If you want a vanilla Python-only solution, this project is a single-file vector library.
You can also get 3-vectors from Maya directly using the API:
from maya.api.OpenMaya import MVector
v1 = MVector(1,2,3)
v2 = MVector(3,4,5)
print(v1 + v2)
These are 3D vectors, but you can just ignore the third component.
For actually creating and editing the UV set, there are a couple of complexities to consider.
Just creating a new UV set gives you an empty set: the set exists, but it won't contain any UVs. The relationship between verts and UVs is not always 1:1; you can have more than one UV for a physical vert, and you can have unmapped vertices. To create a 1-UV-per-vertex mapping, it's easiest to just apply a simple planar or sphere map.
You'll need to set the 'current uv set' so Maya knows which set you want to work on. Unfortunately, when you ask Maya for UV set names it gives you unicode, but when you set them it expects strings (at least in 2016 - I think this is a bug).
You don't really need a vector to set this, if your current code is any indication.
The final solution will look something like this:
import maya.cmds as cmds
# work on only one selected object at a time
selected_object = cmds.ls(sl=1, fl=1)[0]
vertexCount = cmds.polyEvaluate(selected_object, v=True)
# add a projection so you have 1-1 vert:uv mapping
cmds.polyPlanarProjection(selected_object, cm=True)
# get the name of the current uv set
latest_uvs = cmds.polyUVSet(selected_object, q=True, auv=True)
# it's unicode by default, has to be a string
new_uv = str(latest_uvs[-1])
# set the current uv set to the newly created one
cmds.polyUVSet(selected_object, e=True, cuv = True, uvs=new_uv)
vert_interval = 1.0 / vertexCount
for vert in range(vertexCount):
    uv_vert = "{}.map[{}]".format(selected_object, vert)
    cmds.polyEditUV(uv_vert, u=0, v=vert * vert_interval, relative=False, uvs=new_uv)
I need a little help with the maths for drawing lines between 2 points on a sphere. I have a 3D globe and some markers on it. I need to draw a curved line from point 1 to point 2. I managed to draw lines from point to point with a LineRenderer, but they are drawn at the wrong angle and I can't figure out how to implement lines that go at the right angle. The code so far:
public static void DrawLine(Transform From, Transform To){
    float count = 12f;
    LineRenderer linerenderer;
    GameObject line = new GameObject("Line");
    linerenderer = line.AddComponent<LineRenderer>();
    var points = new List<Vector3>();
    Vector3 center = new Vector3(
        (From.transform.position.x + To.transform.position.x) / 2f,
        (From.transform.position.y + To.transform.position.y) ,
        (From.transform.position.z + To.transform.position.z) / 2f
    );
    for (float ratio = 0; ratio <= 1; ratio += 1 / count)
    {
        var tangent1 = Vector3.Lerp(From.position, center, ratio);
        var tangent2 = Vector3.Lerp(center, To.position, ratio);
        var curve = Vector3.Lerp(tangent1, tangent2, ratio);
        points.Add(curve);
    }
    linerenderer.positionCount = points.Count;
    linerenderer.SetPositions(points.ToArray());
}
So what I have now is creepy lines rising up along the y axis.
What should I take into account to make the lines go along the sphere?
I suggest finding the normal vector of your two points with a cross product (if your sphere is centered at the origin) and then normalizing it to use as a rotation axis for a quaternion rotation. To build the interpolated points, you can simply rotate the first point around this axis by an angle of k * a, where k is a parameter from 0 to 1 and a is the angle between your first two vectors, which you can find with the acos() of the dot product of your two normalized points.
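A minimal Unity C# sketch of that rotation-axis idea, assuming the sphere is centered at the origin and that from/to are points on its surface (the class and method names are just illustrative):

using System.Collections.Generic;
using UnityEngine;

public static class SphereArc
{
    // Points along the arc from "from" to "to" on a sphere centered at the origin.
    public static List<Vector3> ArcPoints(Vector3 from, Vector3 to, int segments)
    {
        var points = new List<Vector3>();
        // Rotation axis: normal of the plane spanned by the two position vectors.
        Vector3 axis = Vector3.Cross(from, to).normalized;
        // Angle a between the two points, from the dot product of the normalized vectors.
        float angle = Mathf.Acos(Vector3.Dot(from.normalized, to.normalized)) * Mathf.Rad2Deg;
        for (int i = 0; i <= segments; i++)
        {
            float k = i / (float)segments;                        // k goes from 0 to 1
            Quaternion rotation = Quaternion.AngleAxis(k * angle, axis);
            points.Add(rotation * from);                          // rotate the first point by k * a
        }
        return points;
    }
}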
EDIT: I thought about a much easier solution (again, if the sphere is centered): you can do a lerp between your two vectors, then normalize the result and multiply it by the radius of the sphere. However, the spacing between the resulting points won't be constant, especially if they are far from each other.
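A sketch of that lerp-and-normalize version, again assuming a sphere centered at the origin (the method name is just illustrative):

// Straight-line interpolation pushed back onto the sphere surface.
// Note that the spacing between points is not constant for a linear t.
static Vector3 PointOnArc(Vector3 from, Vector3 to, float t, float radius)
{
    return Vector3.Lerp(from, to, t).normalized * radius;
}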
EDIT 2: you can fix the problem of the second solution by using a function instead of a linear parameter for the lerp: f(t) = sin(t*a) / sin((PI + a*(1 - 2*t)) / 2) / dist(point1, point2), where a is the angle between the two points.
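As a sketch, that remapping could be applied to t before calling the lerp above; this assumes the two input points are normalized first, so that dist(point1, point2) is the chord length between the unit vectors (names are illustrative):

// Remaps a linear parameter t so the resulting points are evenly spaced along the arc.
// a is the angle between the two normalized points, chord is their straight-line distance.
static float RemapArcParameter(float t, float a, float chord)
{
    return Mathf.Sin(t * a) / Mathf.Sin((Mathf.PI + a * (1f - 2f * t)) / 2f) / chord;
}

// Usage: PointOnArc(from.normalized, to.normalized, RemapArcParameter(t, a, chord), radius)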
So I am using a compute shader in Unity to find groups of vertices that overlap (are the same as) other groups of vertices in a Vector3[].
I have a List<List<int>> called faces. Each List in the faces list is a group of indices that point into the triangles array of a Mesh. This is so I can have multiple triangles that represent an N-sided face.
After I do some C# preparation to get that faces matrix into something the GPU can understand, I send all the mesh and faces data over to the compute shader:
This code is for those who need to see the whole thing. I get more specific in the next section.
public void ClearOverlap()
{
    //faces
    ComputeBufferMatrix matrix = ListListIntToComputeBufferMatrix(faces);
    GlobalProperties.findOverlappingFaces.SetInt("width", matrix.width);
    GlobalProperties.findOverlappingFaces.SetInt("height", matrix.height);
    GlobalProperties.findOverlappingFaces.SetBuffer(0, "faces", matrix.computeBuffer);
    ComputeBuffer facesSharing = new ComputeBuffer(matrix.width * matrix.height, sizeof(int));
    int[] facesSharingArray = new int[matrix.width * matrix.height];
    facesSharing.SetData(facesSharingArray);
    GlobalProperties.findOverlappingFaces.SetBuffer(0, "facesSharing", facesSharing);
    //vertices
    ComputeBuffer vertices = new ComputeBuffer(mesh.vertices.Length, sizeof(float) * 3);
    vertices.SetData(mesh.vertices);
    GlobalProperties.findOverlappingFaces.SetBuffer(0, "vertices", vertices);
    //triangles
    ComputeBuffer triangles = new ComputeBuffer(mesh.triangles.Length, sizeof(int));
    vertices.SetData(mesh.triangles);
    GlobalProperties.findOverlappingFaces.SetBuffer(0, "triangles", triangles);
    //output
    ComputeBuffer output = new ComputeBuffer(matrix.height, sizeof(float) * 3);
    Vector3[] outputInfo = new Vector3[matrix.height];
    output.SetData(outputInfo);
    GlobalProperties.findOverlappingFaces.SetBuffer(0, "returnFaces", output);
    //dispatch
    GlobalProperties.findOverlappingFaces.Dispatch(0, matrix.height, 1, 1);
    //clear out overlapping faces
    output.GetData(outputInfo);
    for(int i = 0; i < outputInfo.Length; i+=1)
    {
        Debug.Log(outputInfo[i]);
    }
    Debug.Log(mesh.vertices.Length);
    Debug.Log(mesh.vertices[1000]);
    Debug.Log(outputInfo[0].x);
    Debug.Log(outputInfo[0].y);
    Debug.Log(outputInfo[1000].z);
    //dispose buffers
    matrix.computeBuffer.Dispose();
    vertices.Dispose();
    triangles.Dispose();
    output.Dispose();
    facesSharing.Dispose();
}
Here is the Next Section
This next piece of code is the part where I send the mesh vertices into the ComputeBuffer, which then sets the RWStructuredBuffer called "vertices" in the compute shader:
//vertices
ComputeBuffer vertices = new ComputeBuffer(mesh.vertices.Length, sizeof(float) * 3);
vertices.SetData(mesh.vertices);
GlobalProperties.findOverlappingFaces.SetBuffer(0, "vertices", vertices);
At no point am I changing the values of the vertices after I set them in the compute buffer. Before this point the values are perfectly fine. In fact, even after the compute shader is dispatched the values are still fine. Only while they are in the compute shader are they bad.
I am expecting values like 3.5, 0.5, and 10.5 (I am working with block points, so they are 0.5 off of the grid). However, I am getting values like 2.8054E-42, 3.4563E-45, etc.
Important Note
I am not changing the values in the compute shader. I am literally just writing them back out into another RWStructuredBuffer that is bound to a different (but same-sized) Vector3[] than "mesh.vertices".
Summary
The Vector3[] gets changed while going into the compute shader, not before and not after.
Can anyone shed some light on this issue? I have searched so many forums and no one seems to have experienced it.
PS
I even thought that maybe the bits of the floating-point values were being flipped during the dispatch or something, so I went to this website (https://www.h-schmidt.net/FloatConverter/IEEE754.html) to see what the values are when I unflip them, but that's not what's happening.
Here Is the Compute Shader Code
The RWStructuredBuffer vertices holds the vertices, and the RWStructuredBuffer returnFaces is the other Vector3[] that returns what is in the vertices array. It's literally just sending values in and getting the same values out. But they are not the same values, which is the problem!
#pragma kernel CSMain
int width;
int height;
RWStructuredBuffer<int> faces;
RWStructuredBuffer<int> triangles;
RWStructuredBuffer<float3> vertices;
RWStructuredBuffer<int> facesSharing;
RWStructuredBuffer<float3> returnFaces;
[numthreads(1,1,1)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
    returnFaces[id.x] = vertices[id.x];
}
The Problem
Hi, I'm basically trying to do the same thing as described here:
Unity Intersections Mask
With the caveat that the plane isn't exactly a plane but a 3D cone (very large relative to the arbitrary 3D object), and the camera I'm using has to be an orthographic camera (so no deferred rendering).
I also need to do this basically every frame.
What I tried
I've tried looking up various intersection depth shaders, but they all seem to be written for a perspective camera.
Even then, they don't render the non-intersecting parts of the 3D objects as transparent; instead they color parts of them differently.
The linked stackoverflow question mentions rendering the plane normally as an opaque object, and then using a fragment shader to render only the part of objects that intersect the plane.
However, based on my (admittedly) very limited understanding of shaders, I'm uncertain how to go about doing this. As far as I know, each fragment has only one depth value, which is the distance from the camera's near clipping plane to the point on the object closest to the camera that is shown by that fragment/pixel.
Since the rest of the object is transparent in this case, and I need to show parts of the object that would normally be covered (and thus, from what I understand, whose depth is not known), I can't see how I could draw only the parts that intersect my cone.
I've tried the following approaches other than using shaders:
Use a CSG algorithm to actually do a boolean intersect operation between the cone and objects and render that.
Couldn't do it because the CSG algorithms were too expensive to do every frame.
Try using the contactPoints from the Collision generated by Unity to extract all the points (vertices) where the two meshes intersect and construct a new mesh from those points.
This led me down the path of 3D Delaunay triangulation, which was too much for me to understand, probably too expensive like the CSG attempt, and I'm pretty sure there is a much simpler solution to this problem that I'm just missing.
Some Code
The shader I initially tried using (and which didn't work) was based on code from here:
https://forum.unity.com/threads/depth-buffer-with-orthographic-camera.355878/#post-2302460
And applied to each of the objects.
I modified the line float partY = i.projPos.y + (i.projPos.y/_ZBias); to drop the hard-coded _ZBias correction factor (and slightly changed other color-related values).
From my understanding it should work, since it seems to compare the depth buffer with the actual depth of the object and only color it as the _HighlightColor when the two are sufficiently similar.
Of course, I know almost nothing about shaders, so I have little faith in my assessment of this code.
//Highlights intersections with other objects
Shader "Custom/IntersectionHighlights"
{
    Properties
    {
        _RegularColor("Main Color", Color) = (1, 1, 1, 0) //Color when not intersecting
        _HighlightColor("Highlight Color", Color) = (0, 0, 0, 1) //Color when intersecting
        _HighlightThresholdMax("Highlight Threshold Max", Float) = 1 //Max difference for intersections
        _ZBias("Highlight Z Bias", Float) = 2.5 //Balance out the Z-axis fading
    }
    SubShader
    {
        Tags { "Queue" = "Transparent" "RenderType" = "Transparent" }
        Pass
        {
            Blend SrcAlpha OneMinusSrcAlpha
            ZWrite Off
            Cull Off

            CGPROGRAM
            #pragma target 3.0
            #pragma vertex vert
            #pragma fragment frag
            #include "UnityCG.cginc"

            uniform sampler2D _CameraDepthTexture; //Depth Texture
            uniform float4 _RegularColor;
            uniform float4 _HighlightColor;
            uniform float _HighlightThresholdMax;
            uniform float _ZBias;

            struct v2f
            {
                float4 pos : SV_POSITION;
                float4 projPos : TEXCOORD1; //Screen position of pos
            };

            v2f vert(appdata_base v)
            {
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.projPos = ComputeScreenPos(o.pos);
                return o;
            }

            half4 frag(v2f i) : COLOR
            {
                float4 finalColor = _RegularColor;
                //Get the distance to the camera from the depth buffer for this point
                float sceneZ = tex2Dproj(_CameraDepthTexture, UNITY_PROJ_COORD(i.projPos)).r * 400;
                //Actual distance to the camera
                float partY = i.projPos.y;// + (i.projPos.y/_ZBias);
                //If the two are similar, then there is an object intersecting with our object
                float diff = (abs(sceneZ - partY)) / _HighlightThresholdMax;
                if (diff <= 1)
                {
                    finalColor = _HighlightColor;
                }
                half4 c;
                c.r = finalColor.r;
                c.g = finalColor.g;
                c.b = finalColor.b;
                c.a = (diff <= 1) ? 1.0f : 0.0f;
                return c;
            }
            ENDCG
        }
    }
    FallBack "VertexLit"
}
The result of the (erroneous) code above is that the object always becomes transparent, regardless of whether or not it intersects the cone:
(The object is fully transparent even though it intersects the cone; pic taken from the Scene view at runtime.)
Ultimately it just seems to me like it comes back to shaders. How would I go about achieving this effect? It doesn't necessarily have to be with shaders; anything that works is fine for me, to be honest. Example code would be great.
I'm making a post-processing shader (in Unity) that requires world-space coordinates. I have access to the depth information of a certain pixel, as well as the on-screen location of that pixel. How can I find the world position that that pixel corresponds to, much like the function ViewportToWorldPos()?
It's been three years! I was working on this recently, and an older engineer helped me solve the problem. Here is the code.
We first need to pass a camera transform matrix to the shader from a script:
void OnRenderImage(RenderTexture src, RenderTexture dst)
{
    Camera currentCamera = Camera.main;
    Matrix4x4 matrixCameraToWorld = currentCamera.cameraToWorldMatrix;
    Matrix4x4 matrixProjectionInverse = GL.GetGPUProjectionMatrix(currentCamera.projectionMatrix, false).inverse;
    Matrix4x4 matrixHClipToWorld = matrixCameraToWorld * matrixProjectionInverse;
    Shader.SetGlobalMatrix("_MatrixHClipToWorld", matrixHClipToWorld);
    Graphics.Blit(src, dst, _material);
}
Then we need the depth information to transform the clip-space position, like this:
inline half3 TransformUVToWorldPos(half2 uv)
{
    half depth = tex2D(_CameraDepthTexture, uv).r;
    #ifndef SHADER_API_GLCORE
        half4 positionCS = half4(uv * 2 - 1, depth, 1) * LinearEyeDepth(depth);
    #else
        half4 positionCS = half4(uv * 2 - 1, depth * 2 - 1, 1) * LinearEyeDepth(depth);
    #endif
    return mul(_MatrixHClipToWorld, positionCS).xyz;
}
That's all.
Have a look at this tutorial: http://flafla2.github.io/2016/10/01/raymarching.html
Essentially:
store one vector per corner of the screen, passed as constants, that goes from the camera position to said corner
interpolate the vectors based on the screen-space position, or the UVs of your screen-space quad
compute the final position as cameraPosition + interpolatedVector * depth
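A minimal C#-side sketch of that approach, assuming a Camera component on the same GameObject and a post-processing material; the shader property name _FrustumCorners and the class name are assumptions, not part of the tutorial:

using UnityEngine;

[RequireComponent(typeof(Camera))]
public class FrustumCornerRays : MonoBehaviour
{
    public Material material; // post-processing material that reconstructs positions

    void OnRenderImage(RenderTexture src, RenderTexture dst)
    {
        Camera cam = GetComponent<Camera>();
        // Rays from the camera to the four corners of the far clip plane (in camera space).
        Vector3[] corners = new Vector3[4];
        cam.CalculateFrustumCorners(new Rect(0, 0, 1, 1), cam.farClipPlane,
            Camera.MonoOrStereoscopicEye.Mono, corners);
        Matrix4x4 rays = Matrix4x4.identity;
        for (int i = 0; i < 4; i++)
        {
            // Convert to world space so the shader can do cameraPosition + ray * depth.
            rays.SetRow(i, cam.transform.TransformVector(corners[i]));
        }
        material.SetMatrix("_FrustumCorners", rays);
        Graphics.Blit(src, dst, material);
    }
}

The fragment shader would then pick or interpolate the matching corner ray from the screen-space UV and add it, scaled by the linear 0-1 depth, to the camera position.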
I have a point in space represented by a 4x4 matrix. I'd like to get the screen coordinates for the point. Picking appears to be the exact opposite of what I need. I'm using the screen coordinate to determine where to draw text.
Currently the text I draw is floating in space far in front of the points. I've attached screenshots of zoomed-in and zoomed-out views to better explain. As you can see in the screenshots, the distance between each point is the same when zoomed in, when it should be smaller.
Am I missing a transformation? World coordinates consider 0,0,0 to be the center of the grid. I'm using SlimDX.
var viewProj = mMainCamera.View * mMainCamera.Projection;
//Convert 4x4 matrix for point to Vector4
var originalXyz = Vector3.Transform(Vector3.Zero, matrix);
//Vector4 to Vector3
Vector3 worldSpaceCoordinates = new Vector3(originalXyz.X, originalXyz.Y, originalXyz.Z);
//Transform point by view projection matrix
var transformedCoords = Vector3.Transform(worldSpaceCoordinates, viewProj);
Vector3 clipSpaceCoordinates = new Vector3(transformedCoords.X, transformedCoords.Y, transformedCoords.Z);
Vector2 pixelPosition = new Vector2((float)(0.5 * (clipSpaceCoordinates.X + 1) * ActualWidth), (float)(0.5 * (clipSpaceCoordinates.Y + 1) * ActualHeight));
Turns out I was way overthinking this. Just project the point to the screen by passing your viewport information to Vector3.Project. It's a 3-line solution.
var viewProj = mMainCamera.View * mMainCamera.Projection;
var vp = mDevice.ImmediateContext.Rasterizer.GetViewports()[0];
var screenCoords = Vector3.Project(worldSpaceCoordinates, vp.X, vp.Y, vp.Width, vp.Height, vp.MinZ, vp.MaxZ, viewProj);