Material texture scale in the Lightweight Render Pipeline - C#

I'm using scaled planes to create the geometry for my level. These planes share a material, but because of the scaling, the texture on the planes is also stretched. Before switching to the Lightweight Render Pipeline, I used this code to mitigate the problem:
var render = GetComponent<MeshRenderer>();
var scale = new Vector2(transform.localScale.x, transform.localScale.z);
render.material.SetTextureScale("_MainTex", scale);
render.material.SetTextureScale("_BumpMap", scale);
which created an instance of the material and changed the Tiling property to match the local scale of the object.
However, since switching to the Lightweight RP, the code doesn't scale the texture. When logging render.material.mainTextureScale to the debug console, it shows my updated values, but when I inspect the GameObject it is attached to, the Tiling property of the material hasn't changed.
The material uses the Lightweight Render Pipeline/Lit shader.
Is there any way I can fix this in my C# code? If not, is there any way to do it without changing the way I do levels?

Yes, I discovered it is possible. I looked into the shader, and the problem is that there is no "_MainTex" property, which is what the code is looking for.
To fix it, change all instances of "_MainTex" to "_BaseMap", which is the property the Lit shader actually uses.
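With that change, the original snippet would look like this (the "_BumpMap" name appears unchanged in the Lit shader, as far as I can tell):
var render = GetComponent<MeshRenderer>();
var scale = new Vector2(transform.localScale.x, transform.localScale.z);
render.material.SetTextureScale("_BaseMap", scale);
render.material.SetTextureScale("_BumpMap", scale);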

Related

How can I repeat a texture at its original scale on my cube in Unity

So as you see in the picture, I made a texture repeat on a rectangle (its size is 40, 10, 60), but it repeats the same number of times on every face, so depending on the size of the face the texture is stretched.
In the picture you can see that on the top face the texture repeats correctly and keeps its original size, but on the other faces it is stretched.
Is there a way to repeat the texture without changing its size?
Thank you for your responses.
Screenshot of the problem
Edit: this C# script does exactly what I want, but is there a way to do it without a script, since it was written in 2017?
https://github.com/Dsphar/Cube_Texture_Auto_Repeat_Unity/blob/master/ReCalcCubeTexture.cs
Unfortunately you will probably need to create a new material with the same image for each different scale using the 'Tiling' attribute:
*** Edit #1
The x and y Tiling values need to be proportional to the scale of the plane, or the texture will stretch.
If the size of the mesh being textured is static, you can change its UVs in a 3D program. You could even change the UVs via script (see the sketch at the end of this answer).
Another option would be to look into worldspace (triplanar) shaders. These texture based on world position rather than the vertices local position.
If you are using Shader Graph, look at the triplanar node.
https://docs.unity3d.com/Packages/com.unity.shadergraph@6.9/manual/Triplanar-Node.html
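Regarding the change-the-UVs-via-script option above, here is a minimal sketch for a plane lying flat in XZ (the component name is mine; the linked ReCalcCubeTexture script does the equivalent for every face of a cube):

using UnityEngine;

// Scales this object's mesh UVs to match its local scale, so a shared
// material's texture keeps a consistent world-space size.
[RequireComponent(typeof(MeshFilter))]
public class ScaleUVsToObject : MonoBehaviour
{
    void Start()
    {
        // Accessing .mesh instantiates a copy, so other objects sharing the mesh are unaffected.
        var mesh = GetComponent<MeshFilter>().mesh;
        var uvs = mesh.uv;
        var scale = new Vector2(transform.localScale.x, transform.localScale.z);

        for (int i = 0; i < uvs.Length; i++)
            uvs[i] = Vector2.Scale(uvs[i], scale);

        mesh.uv = uvs;
    }
}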

Why is Unity Canvas Image always rendering over my gameObjects even though it is definitely behind them?

In my 2d Unity project, I have a Canvas with an Image that I want for a Background.
I have 2 gameObjects in front of this background. But no matter how much fiddling I do with Pos Z, Sorting Layers, or hierarchy sorting, the image is always in front of the objects.
Gif above shows in 3d mode that even though the image is clearly behind these objects, it will always appear over them if they overlap.
Hierarchy:
Main
Camera (Inspector: https://i.imgur.com/Q5a52cf.png)
BackgroundCanvas (Inspector: https://i.imgur.com/m9Pxr6B.png)
BackgroundImage (Inspector: https://i.imgur.com/jTx7pEW.png)
Object1 (Inspector: https://i.imgur.com/YcClEhk.png)
Object2
Any advice to rescue me from this madness is much appreciated.
Set the sprite renderer's transform z value to 0 instead of 100
If that does not solve it, please also specify the camera properties, so I can try to recreate the exact setup.
Try clicking Layers -> Edit Layers; under Sorting Layers you can change the order by dragging the layers, and everything higher in the list appears behind in the camera.
You could create a layer called Object
Assign it to the game objects.
Create an Object Camera with:
Culling Mask -> Object layer
Depth greater than your current main camera
Projection -> Orthographic
Clear Flags -> Solid Color
Then set the Canvas Render Mode to Screen Space - Camera and assign the Render Camera to be the Object Camera.
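If you prefer doing that setup from code, here is a rough sketch mirroring the steps above (it assumes a layer named "Object" already exists in the project and that the references are assigned in the Inspector):

using UnityEngine;

public class ObjectCameraSetup : MonoBehaviour
{
    public Camera mainCamera;
    public Canvas backgroundCanvas;

    void Start()
    {
        // Secondary camera that only renders the "Object" layer.
        // (Assign that layer to your game objects, e.g. obj.layer = LayerMask.NameToLayer("Object");)
        var objectCamera = new GameObject("Object Camera").AddComponent<Camera>();
        objectCamera.cullingMask = LayerMask.GetMask("Object");
        objectCamera.depth = mainCamera.depth + 1;          // draw after the main camera
        objectCamera.orthographic = true;
        objectCamera.clearFlags = CameraClearFlags.SolidColor;

        // Render the canvas with the object camera.
        backgroundCanvas.renderMode = RenderMode.ScreenSpaceCamera;
        backgroundCanvas.worldCamera = objectCamera;
    }
}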
In the Inspector tab of the object or background, under Sprite Renderer -> Additional Settings -> Sorting Layer, change it to a different layer.
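If you would rather set that from code (a small sketch; the layer name is just an example):

var sprite = GetComponent<SpriteRenderer>();
sprite.sortingLayerName = "Foreground"; // must match a layer defined under Edit Layers
sprite.sortingOrder = 1;                // higher values draw on top within the layer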
Had this same issue and was able to fix it with these steps:
In canvas settings change Screen Space Overlay to Screen Space Camera
Set the camera variable to the one you are using for your scene.
I figured out a workaround. I basically created a VisualElement inside the UI Builder and set a render texture as its background. Then I added an extra camera to my project to view all the sprites that needed to be on top. That camera feeds the render texture, so now everything that camera sees is forced on top of the UI Document as the background of that VisualElement. If you want control over the whole screen, just set the VisualElement position to absolute and max out its dimensions. If your game doesn't have a fixed aspect ratio it might cause some stretching, but other than that I can't really tell the difference. Sorting layers for UI Documents are broken and Unity needs to work on that; this is the best option I've found. Hope this helps.
I had the same problem and I fixed it by attaching the camera to the canvas which is screen space and finally changing the sorting layer of my object to -1.

Tiled object position doesn't match MonoGame position

I have an issue when importing Tiled maps with Object layers into a MonoGame project. Everything is pretty much working great; I can import the Tiled map into the project and render the map with the use of MonoGame.Extended.
My issue starts when I try to use an Object from an Object Layer in the Tiled map. I can access the object, and I can access its properties (position, etc) just fine. In fact, when accessing the object's properties, they output exactly as they should. Below are images to clarify:
This is the image of the object within the Tiled map editor
And below is the code where I import the map, collect the object's information, and assign that information (position, in this case) to another object (the player)
gameMaps.Add(MapBuilder.LoadTiledMap("TestMap"));
gameMapRenderer = new TiledMapRenderer(graphicsDevice);
var viewportAdapter = new BoxingViewportAdapter(GameInstance.Instance.Window, graphicsDevice, 1440 / 4, 810 / 4);
mainCamera = new Camera2D(viewportAdapter);
var player = new Player();
var playerStartPos = new Vector2();
foreach (var obj in gameMaps[0].ObjectLayers[0].Objects)
{
    if (obj.Name.Equals("PlayerStart"))
    {
        playerStartPos = obj.Position;
        break;
    }
}
player.position = playerStartPos;
entities.Add(player);
mainCamera.LookAt(player.position);
But below is the image that demonstrates the issue:
Issue as an image
The sprite/character is the player object that is assigned the position, in case that wasn't clear.
To explain: I am able to get the object's position and assign it to my 'Player' object, and my player then sits at the correct position according to the object's position in the Tiled map editor. However, the Tiled map itself is actually rendering at an incorrect position, even though I never make any adjustments to the position of the map as a whole.
Some further details that might help:
In the Tiled map editor, position (0, 0) is the topmost tile of the map. However, in MonoGame, if I go to the same position, the Tiled map is actually rendered/placed some distance below. This means that the positioning of the map does not match what it should be.
Below is an image showing what that position is in MonoGame, and how it is not properly correlating with what that position is in Tiled.
MonoGame screenshot
After speaking a bit with the creator of MonoGame.Extended, we narrowed down the issue. It was actually two things:
In the Tiled map editor, the grid system for isometric maps is different from the grid system MonoGame uses to render everything. Tiled's grid for isometric maps is a normal grid rotated to match the isometric view, whereas MonoGame's is just a normal grid. This causes the (x, y) position of objects in Tiled to be imported into MonoGame with a value that doesn't match where it should actually be, which explains the position being wrong in the image above. MonoGame.Extended's creator took note of the bug but said a fix isn't certain, because it would require a formula for converting an isometric Tiled position to a MonoGame position (a rough sketch of such a conversion is below).
Because my sprites had extra padding at the top, the overall map was rendering that many pixels lower than what should be the proper 0 position. It had something to do with the map being generated in MonoGame from the sprites' dimensions instead of the map's. Not a giant problem, honestly, since a workaround could presumably be made in a custom implementation of the TiledMapRenderer.
I can post the link to the MonoGame.Extended discussion later.
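For reference, here is that rough sketch, based on how Tiled itself projects isometric pixel coordinates to the screen (treat the formula as an assumption and verify it against your map's tile size and renderer):

using Microsoft.Xna.Framework;

// Converts an object position from an isometric Tiled map into screen space.
// Tiled stores isometric object positions in "map pixels", where both axes
// are measured in units of the tile height.
static Vector2 IsoObjectToScreen(Vector2 tiledPosition, int tileWidth, int tileHeight)
{
    float tileX = tiledPosition.X / tileHeight;
    float tileY = tiledPosition.Y / tileHeight;

    return new Vector2(
        (tileX - tileY) * tileWidth / 2f,
        (tileX + tileY) * tileHeight / 2f);
}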

Remapping colours on textures dynamically in unity

I'm making an RTS game in Unity. In such games, players can usually determine a unit's allegiance by its colour markings. I'm now trying to implement a system that remaps a purple marker colour to the unit owner's colour.
One idea was to pick a colour to be used as a mask and then recolour it to any other colour. It could be done using some hue-distribution function:
Solution 1
I used a function based on max(). You can see the plot there.
hue = min(hue, pow(Math.abs(hue-MASK_HUE),8)*5000000+RESULT_HUE)
This solution has two big flaws:
Purple can't be used (I don't like it anyway)
Only full colours are applicable (no brown, black, white...)
What you see above is just my fiddling. The actual project would run on Unity engine in C#.
Solution 2
My friend proposed a different approach: every image should use a map - either a faded mask or just a true/false array - to mark where the colours should be applied. I haven't tried this yet, as it's more complicated to test ad hoc.
Altering textures in Unity
It seems that the texture of a material can be easily altered in Unity, so the question is:
Q: How should I implement dynamic texture colouring (generation) in Unity? Are any of my methods a good way to do it? How should I produce the new textures, and using what functions?
Rather than full code, I need information about which functions I should use. I hope other users will also profit from the general info.
To help you answer, I can guess the process will have 3 important parts:
Somehow get the texture of a model material (we know just the GameObject here)
If it hasn't been recolored already, use some algorithm to change the image properly. We'll need some pixel manipulation here
Apply the texture back to the model material
If you go the texture manipulation route, you'll need to make an additional copy of the texture stored in memory for each color, and this will increase the "loading" time of your scene. You can access the texture of a GameObject with renderer.material.mainTexture used on the GameObject that has the renderer component. Then you can use all sorts of pixel manipulation options such as SetPixel, or SetPixels to do it in batches for better performance.
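As a rough sketch of that texture-manipulation route (the names markerColor, teamColor and tolerance are mine, and the source texture must have Read/Write enabled for GetPixels/SetPixels to work):

using UnityEngine;

public static class TeamRecolor
{
    public static void Apply(GameObject unit, Color markerColor, Color teamColor, float tolerance = 0.05f)
    {
        var renderer = unit.GetComponent<Renderer>();

        // Clone the texture so other units sharing the material keep their colours.
        var source = (Texture2D)renderer.material.mainTexture;
        var copy = Object.Instantiate(source);

        Color.RGBToHSV(markerColor, out float markerHue, out _, out _);
        Color.RGBToHSV(teamColor, out float teamHue, out float teamSat, out _);

        var pixels = copy.GetPixels();
        for (int i = 0; i < pixels.Length; i++)
        {
            Color.RGBToHSV(pixels[i], out float h, out float s, out float v);
            if (Mathf.Abs(h - markerHue) < tolerance)
            {
                // Keep the pixel's brightness and alpha, swap in the team hue.
                float alpha = pixels[i].a;
                pixels[i] = Color.HSVToRGB(teamHue, s * teamSat, v);
                pixels[i].a = alpha;
            }
        }

        copy.SetPixels(pixels);
        copy.Apply();

        // Accessing .material (not .sharedMaterial) instantiates the material for this renderer only.
        renderer.material.mainTexture = copy;
    }
}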
There is, however, a third option that I would recommend. If you write or modify a custom shader, you can perform the colour replacement at render time without significantly decreasing performance. This can be accomplished by adding a step where you convert your colour output from RGB to HSV, change the Hue and Saturation, and then convert back to RGB.
By making the Hue and Saturation external parameters, you should be able to use a full range of colors including whatever you used for your marker color.
This post from the Unity forums should help with the hue shift shader.

XNA - How to access a texture from .x model?

I am just getting into XNA programming and have been unable to figure out how I can access the texture from a ".x" model. I am using a custom shader to display my model (just a cube with a texture mapped on it) with the filters set to Point. To do this I needed to pass my texture file to the effect, and the texture had to be imported separately from my model or else it would complain, since it is also included in the model. This works perfectly how I want it; however, this isn't really an agreeable method when I have many different models, each with their own textures.
My question is:
How am I able to access the texture included in my model directly from it and send that to my shader? Or am I able to access it directly with HLSL?
What I have tried:
I have found posts saying that it can be assigned to a texture variable with:
Texture2D texture = ((BasicEffect)model.Meshes[0].Effects[0]).Texture;
When I tried this, the game runs but the cubes are just black. I can see that the texture variable is holding data and has the right dimensions, but I can't tell if it is correctly holding the actual image. When I used just the BasicEffect, the cubes rendered just fine with their texture.
Update:
I have managed to get this to work after a little fiddling. My game loads in a few hundred of the same cube, and upon creation of each it would try to save the texture of the model using the code above and then go through the mesh parts and change the effects to my custom effect. I discovered that the first cube created would save the texture okay, but any subsequent cubes created would complain that their effect can't be cast to a BasicEffect. This resulted in one textured cube and then a lot of black ones. I am guessing that when the same model is reused over and over like that, it just keeps the effect that was swapped in on the first instance of the cube. Is this normal? I got them all to render as textured by making the texture variable static.
Please observe that you are assigning the texture of your model to a temporary Texture2D variable, and not setting the Texture present in the Effect currently tied to your mesh.
If you do the following:
Texture2D textureToSet = Content.Load<Texture2D>("MyTex");

// Keep in mind that this approach requires a BasicEffect and that only one
// effect is present on each mesh to work properly.
foreach (ModelMesh mesh in model.Meshes)
{
    ((BasicEffect)mesh.Effects[0]).Texture = textureToSet;
}
The quirky stuff going on inside the foreach is simply that you are grabbing the effect, casting it to a BasicEffect, and using its Texture property to give it a new texture to draw with. Please see the documentation and Shawn's blog for a more detailed introduction.
If anyone else is wondering about this as I was, saving the texture using:
Texture2D texture = ((BasicEffect)model.Meshes[0].Effects[0]).Texture;
does work, but there is one thing to watch out for, which is what was causing me problems. If you change the effect of the model from the default BasicEffect for one instance, it will be changed for every instance of the model created thereafter. So you will only be able to use the above code before you change the effect for the first time on a particular model.
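For example, something along these lines (a sketch; customEffect and the "Texture" parameter name depend on your own shader):

// Cache the texture while the model still carries its default BasicEffect...
Texture2D modelTexture = ((BasicEffect)model.Meshes[0].Effects[0]).Texture;

// ...then swap in the custom effect and hand the texture to it.
foreach (ModelMesh mesh in model.Meshes)
{
    foreach (ModelMeshPart part in mesh.MeshParts)
    {
        part.Effect = customEffect;
    }
}

customEffect.Parameters["Texture"].SetValue(modelTexture);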
I later found this book which describes exactly how to extract the texture and other information from a model: 3D Graphics with XNA Game Studio 4.0 by Sean James - Specifically chapter 2
