Procedurally generated UVs - C#

I have a procedurally generated cube that has holes in it. The script takes a mesh (the white squares in image 1) and creates the holes shown based on the positions of the squares (image 2); in other words, the squares are placed more or less randomly. The problem I'm having is that the lighting on the mesh is completely messed up (image 3). Currently the mesh's UV array is an empty Vector2 array. I believe the UV array is the problem, so how can I compute the UVs when the only information I have about the mesh is its list of vertices and its list of triangles? Or, if the UVs aren't the problem, how else can I fix this?
(In Unity C#)

If the problem has to do with the normals, try:
mesh.RecalculateNormals();
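
For reference, here is a minimal sketch of the usual assignment order when building a procedural mesh in Unity. The class and method names are hypothetical, and vertices/triangles are assumed to come from your generator:

using UnityEngine;

public class HoleMeshBuilder : MonoBehaviour // hypothetical name
{
    public void Apply(Vector3[] vertices, int[] triangles)
    {
        var mesh = new Mesh();
        mesh.vertices = vertices;
        mesh.triangles = triangles;
        mesh.uv = new Vector2[vertices.Length]; // zeroed UVs are fine if the shader samples no texture
        mesh.RecalculateNormals();              // rebuilds the normals so lighting behaves
        mesh.RecalculateBounds();
        GetComponent<MeshFilter>().mesh = mesh;
    }
}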

Unless the lighting is baked into the texture, it is more likely a problem with the normals than with the UVs. You need to be more specific: show your shader code, how you generate the vertices, etc.

Related

How can I repeat a texture at its original scale on my cube in Unity

As you can see in the picture, I made a texture repeat on a rectangular box (its size is 40, 10, 60), but it repeats the same number of times on every face, so depending on the size of the face the texture gets stretched.
In the picture you can see that on the top face the texture repeats correctly and keeps its original size, but on the other faces it is stretched.
Is there a way to repeat the texture without changing its size?
Thank you for your responses.
Screenshot of the problem.
Edit: this C# script does exactly what I want, but is there a way to do it without a script, since that one was written in 2017?
https://github.com/Dsphar/Cube_Texture_Auto_Repeat_Unity/blob/master/ReCalcCubeTexture.cs
Unfortunately, you will probably need to create a new material with the same image for each different scale, using the material's 'Tiling' attribute.
Edit #1:
The x and y Tiling values need to be proportional to the scale of the plane, or the texture will stretch.
If the size of the mesh being textured is static, you can change its UVs in a 3D program. You could even change the UVs via script.
Another option would be to look into world-space (triplanar) shaders. These apply the texture based on world position rather than the vertices' local positions.
If you are using Shader Graph, look at the triplanar node.
https://docs.unity3d.com/Packages/com.unity.shadergraph@6.9/manual/Triplanar-Node.html
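
As a sketch of the tiling idea, here is a small component that keeps the texture at a fixed world-space size on a scaled object. The component name and the textureWorldSize parameter are made up for illustration, and it assumes the default Unity plane, which lies in the XZ plane:

using UnityEngine;

[ExecuteInEditMode]
public class MatchTilingToScale : MonoBehaviour // hypothetical name
{
    public float textureWorldSize = 1f; // world units one tile should cover (assumption)

    void Update()
    {
        Vector3 s = transform.localScale;
        // Tile proportionally to the X/Z scale so the texture keeps its world-space size.
        GetComponent<Renderer>().sharedMaterial.mainTextureScale =
            new Vector2(s.x / textureWorldSize, s.z / textureWorldSize);
    }
}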

Walking up and down 2D slopes (pixel art)

I'm having an issue where I can't walk up slopes in Unity. My art is pixel art, so I understand the slope is built from 1-pixel steps, but I have no idea how to fix it so I can just walk up and down slopes normally. Currently, going down them is a little bouncy, and going up them is impossible unless you jump. Any help would be appreciated! Here is what the slope looks like:
Edit: the collider looks like this, but I don't know how to fix it:
Sprites automatically have a polygon collider shape created for them when imported into the project. This polygon shape drives the tilemap's polygon collider shapes.
You can modify the physics shape for a sprite to smooth it out and remove this unwanted bumpiness when going up a ramp. See the Custom Physics Shape documentation.
Another important thing to note for your specific problem: when a character has a box-like collider, it will often get snagged on edges and small collider deviations. Unless your game's playstyle is built around a box-shaped entity and its interactions, it's usually recommended to use a rounded collider for moving characters (like a 2D or 3D capsule collider), as sketched below.
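
A minimal sketch of that swap (the component name and size values are assumptions):

using UnityEngine;

public class PlayerColliderSetup : MonoBehaviour // hypothetical name
{
    void Awake()
    {
        // A vertical capsule slides over 1-pixel steps instead of snagging on them.
        var capsule = gameObject.AddComponent<CapsuleCollider2D>();
        capsule.direction = CapsuleDirection2D.Vertical;
        capsule.size = new Vector2(0.8f, 1.8f); // assumed character dimensions in world units
    }
}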

Concave Mesh Triangulation With Known Boundary

I have a set of 3D points that (may) form a concave shape. They are already ordered clockwise. The resulting mesh will be (nearly) planar with some slight height adjustments.
What's the best algorithm for me to use in C# (Unity) to triangulate a mesh out of these points?
I would start from the Triangle.NET open-source project. You may need to derive your own Vertex type to keep the Z values (triangulation is always performed on the XY plane).
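
A rough sketch of that Z-preserving idea follows. Triangle.NET's API differs between forks, so treat the Polygon/Vertex/GenericMesher calls below as an approximation; the key point is stashing each point's Z before triangulating on XY and restoring it afterwards:

using System.Collections.Generic;
using TriangleNet.Geometry;
using TriangleNet.Meshing;

public static class PlanarTriangulation
{
    public static List<UnityEngine.Vector3> Triangulate(IList<UnityEngine.Vector3> boundary)
    {
        var polygon = new Polygon();
        var zByXY = new Dictionary<(double, double), double>();

        foreach (var p in boundary)
        {
            polygon.Add(new Vertex(p.x, p.y)); // the triangulator only sees XY
            zByXY[(p.x, p.y)] = p.z;           // remember the height
        }

        IMesh mesh = new GenericMesher().Triangulate(polygon);

        // Restore Z on the way back to Unity; triangle indices come from
        // mesh.Triangles (enumeration details vary by fork).
        var result = new List<UnityEngine.Vector3>();
        foreach (var v in mesh.Vertices)
            result.Add(new UnityEngine.Vector3((float)v.X, (float)v.Y, (float)zByXY[(v.X, v.Y)]));
        return result;
    }
}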

How would I go about converting my 3D maze to a 2D maze?

What I want is sort of like a mini-map. I've already constructed my algorithms for both the 3D maze and the 2D maze, but I'm not sure how to convert the 3D one into a 2D equivalent. Here's my code from gist.github.
You could take a screenshot of a plane passing through the level the player is currently on, paint the obstructing polygons black, and leave the rest white. But first you would need to cut away everything intersecting that plane, and I'm not sure you can do that easily enough in-game with XNA.
I'd bet it's easier to do manually in a 3D editor: remove everything but the current level, take a large screenshot, and save it as that level's map. Although if you're going to rotate the cube in all directions, you'll need to do that many times.
Another approach is to keep a mini-copy of the entire map as a 3D matrix of cubes and draw the desired 2D slice of it.
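
For the 3D-matrix approach, a minimal sketch in C# (the array names and the wall convention are assumptions):

// Slice a 3D occupancy grid at the player's current level to get a 2D minimap.
bool[,] SliceLevel(bool[,,] maze, int playerY)
{
    int width = maze.GetLength(0), depth = maze.GetLength(2);
    var map = new bool[width, depth];
    for (int x = 0; x < width; x++)
        for (int z = 0; z < depth; z++)
            map[x, z] = maze[x, playerY, z]; // true = wall (drawn black), false = open (white)
    return map;
}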

Render with multiple textures in managed Direct3D 9

I am pretty new to Direct3D and have been looking for a solution to my problem for a couple of days. Most of the tutorials I have seen that cover textures use only one texture. In my program I have multiple textures, each mapping to a specific collection of vertices that make up my mesh.
My question is: how do I load multiple textures into my scene, and how do I map a collection of vertices to only one texture?
For example if I had a mesh of a car and I had a collection of textures like:
Tyres.dds
Body.dds
Cabin.dds
Given the car, how do I map the vertices that make up the tyres to Tyres.dds, the body to Body.dds, and the cabin to Cabin.dds? All these textures have to render, not just one.
Any help would be greatly appreciated.
Thank you.
Usually this is done via submeshes. That means the mesh consists of several parts, each represented as e.g. a triangle list. Each submesh is assigned a material, which you can define however you need; it may include a diffuse color, roughness, and the texture.
So when rendering the mesh, you basically iterate over the submeshes, send each one's material parameters to the graphics card, and then draw it.
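
In Managed Direct3D 9 that loop looks roughly like this (a sketch, assuming the mesh was loaded with Mesh.FromFile and that textures holds one texture per subset in matching order):

using Microsoft.DirectX.Direct3D;

void DrawCar(Device device, Mesh mesh, Texture[] textures)
{
    // Each subset of the mesh is drawn with its own material and texture.
    ExtendedMaterial[] materials = mesh.GetMaterials();
    for (int i = 0; i < materials.Length; i++)
    {
        device.Material = materials[i].Material3D; // diffuse color etc. for this subset
        device.SetTexture(0, textures[i]);         // e.g. Tyres.dds, Body.dds, Cabin.dds
        mesh.DrawSubset(i);                        // draws only this subset's triangles
    }
}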
Another possible solution in DirectX 10 would be to extend the vertex declaration with a TextureIndex variable, or to use 3D texture coordinates. That way you can send all the textures to the graphics card as a texture array and draw the mesh with a single draw call. However, texture arrays are not supported in DirectX 9, so you can either stick with method 1 or try to emulate a texture array.
