I have created two textures for two images. Now I want to show these textures in OpenGL in this order: the left part of image2, the full image1, and the right part of image2. I have done it as below. Image1 shows at the center of the OpenGL screen, but the left and right parts of the screen are not correct (they should show the left part of image2 and the right part of image2 respectively).
Vertex shader:
attribute vec3 a_position;
varying vec2 vTexCoord;
void main() {
vTexCoord = (a_position.xy + 1.0) / 2.0;
gl_Position = vec4(a_position,1);
}
Fragment shader:
precision highp float;
uniform sampler2D sTexture;//texture for image 1
uniform sampler2D sTexture1;//texture for image 2
varying vec2 vTexCoord;
void main () {
if ( vTexCoord.x<=0.25 )
gl_FragColor = texture2D (sTexture1, vec2(vTexCoord.x/2.0, vTexCoord.y));
else if ( vTexCoord.x>0.25 && vTexCoord.x<0.75 )
gl_FragColor = texture2D (sTexture, vec2(vTexCoord.x*2.0, vTexCoord.y));
else if(vTexCoord.x>=0.75 )
gl_FragColor = texture2D (sTexture1, vec2(vTexCoord.x*2.0-1.0, vTexCoord.y));
}
Compiling shaders:
void CreateShaders() {
/***********Vert Shader********************/
vertShader = GL.CreateShader(ShaderType.VertexShader);
GL.ShaderSource(vertShader, vertexShaderSource);
GL.CompileShader(vertShader);
/***********Frag Shader ****************/
fragShader = GL.CreateShader(ShaderType.FragmentShader);
GL.ShaderSource(fragShader, fragmentShaderSource);
GL.CompileShader(fragShader);
}
I want to show these textures in OpenGL in this order: the left part of image2, the full image1, and the right part of image2
If the texture coordinate is less than 0.25, then you want to show the range [0.0, 0.5] of image2. You have to map the x component of the texture coordinate from [0.0, 0.25] to [0.0, 0.5]:
vec2( vTexCoord.x*2.0, vTexCoord.y );
If the texture coordinate is greater than 0.75, then you want to show the range [0.5, 1.0] of image2. You have to map the x component of the texture coordinate from [0.75, 1.0] to [0.5, 1.0]:
vec2( (vTexCoord.x-0.75)*2.0 + 0.5, vTexCoord.y );
or
vec2( vTexCoord.x*2.0 - 1.0, vTexCoord.y );
If the texture coordinate is greater than 0.25 and less than 0.75, then you want to show the full range of image1. You have to map the x component of the texture coordinate from [0.25, 0.75] to [0.0, 1.0]:
vec2( vTexCoord.x*2.0 - 0.5, vTexCoord.y );
The shader code may look like this:
precision highp float;
uniform sampler2D sTexture;//texture for image 1
uniform sampler2D sTexture1;//texture for image 2
varying vec2 vTexCoord;
void main ()
{
float u = vTexCoord.x*2.0;
float a = 0.0;
if( vTexCoord.x > 0.75 )
{
u -= 1.0;
}
else if ( vTexCoord.x >= 0.25 )
{
u -= 0.5;
a = 1.0;
}
vec4 color1 = texture2D(sTexture1, vec2(u, vTexCoord.y));
vec4 color2 = texture2D(sTexture, vec2(u, vTexCoord.y));
gl_FragColor = mix(color2, color1, a);
}
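A quick way to sanity-check the piecewise mapping is to mirror it outside the shader. This standalone C sketch (an illustration, not part of the original code) reproduces the u computation and its range endpoints:

```c
#include <assert.h>

/* Piecewise mapping of the screen-space coordinate x in [0, 1]
   to a texture coordinate u, mirroring the fragment shader logic. */
float map_u(float x) {
    float u = x * 2.0f;
    if (x > 0.75f)
        u -= 1.0f;   /* [0.75, 1.0] -> [0.5, 1.0] of image2 */
    else if (x >= 0.25f)
        u -= 0.5f;   /* [0.25, 0.75] -> [0.0, 1.0] of image1 */
    /* else: [0.0, 0.25] -> [0.0, 0.5] of image2 */
    return u;
}
```

Checking the segment boundaries (0.25 maps to the left edge of image1, 0.75 to its right edge) confirms the three ranges join without gaps.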
Extension to the answer:
What if I want to show only a portion of the right image, for example the range (0.6, 0.8)?
You want to map [0.75, 1.0] to [0.6, 0.8]:
[0.75, 1.0] to [0, 1]: u' = (u-0.75)/0.25
[0, 1] to [0.6, 0.8]: u'' = u'*0.2+0.6
This leads to:
u = (vTexCoord.x-0.75)*0.2/0.25 + 0.6;
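The same remapping can be verified numerically. This small C helper (illustrative only, not from the original answer) applies the two-step formula and confirms the endpoints:

```c
#include <assert.h>
#include <math.h>

/* Map x in [0.75, 1.0] to u in [0.6, 0.8]:
   first normalize to [0, 1], then scale into the target range. */
float remap(float x) {
    return (x - 0.75f) * 0.2f / 0.25f + 0.6f;
}
```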
When mapping [0.75, 1.0] to [0.6, 0.8], I wish to display the remaining portion of [0.75, 1.0] in black/white. When using (vTexCoord.x-0.75)*0.2/0.25 + 0.6, the image portion [0.6, 0.8] fills the whole region [0.75, 1.0].
You have to convert the RGB color to grayscale. I recommend using the formula for relative luminance:
float grayscale = dot(color1.rgb, vec3(0.2126, 0.7152, 0.0722));
color1.rgb = vec3(grayscale);
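The luminance weights are chosen so that pure white maps to 1.0 (they sum to 1) and green contributes the most, matching human brightness perception. A standalone C version of the dot product (for illustration):

```c
#include <assert.h>
#include <math.h>

/* Relative luminance (Rec. 709 weights), equivalent to the shader's
   dot(color.rgb, vec3(0.2126, 0.7152, 0.0722)). */
float luminance(float r, float g, float b) {
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}
```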
Scenario
I'm using Unity C# to re-invent a Google Earth-like experience as a project. New tiles are asynchronously loaded in from the web while the user pans the camera around the globe. So far I'm able to load in all the TMS tiles based on their x and y coordinates and zoom level. Currently I'm using tile x,y to try to figure out where each tile should appear on my earth "sphere", and it's becoming quite tedious; I assume because of the differences between Euler angles and quaternions.
I'm using the angle of Camera.main to figure out which tiles should be viewed at any moment (seems to be working fine)
I have to load / unload tiles for memory management as level 10 can receive over 1 million 512x512 tiles
I'm trying to turn a downloaded tile's x,y coordinates (2d) into a 3d position & rotation
Question
Using just the TMS coordinates of my tile (0,0 - 63,63) how can I calculate the tile's xyz "earth" position as well as its xyz rotation?
Extra
in the attached screenshot I'm at zoom level 4 (64 tiles)
y axis 0 is the bottom of the globe while y axis 15 is the top
I'm mostly using Mathf.Sin and Mathf.Cos to figure out position & rotation so far
** EDIT **
I've figured out how to get the tile position correct. Now I'm stuck on the correct rotation of the tiles.
The code that helped me the most was found in a question about generating a sphere in Python.
I modified the code to look like this:
// convenience helpers #jkr
float ti = tilesInfo["tilesXY"]; // basically the amount of tiles across either axis #jkr
float ti2 = ti / 2;
float pi = Mathf.PI;
float pi2 = pi / 2;
float pipi = pi * 2;
// position for 3d tiles #jkr
float phi = keyY / ti * pi;
float theta = keyX / ti * pipi;
x = Mathf.Sin(phi) * Mathf.Cos(theta) * ER;
y = Mathf.Sin(phi) * Mathf.Sin(theta) * ER;
z = Mathf.Cos(phi) * ER;
** EDIT 2 **
after adding #Ruzihm's answer to compute normals
** EDIT 3 **
after adding #Ruzihm's shader. I went on to make a number of tweaks to get things more situated and there's still a ways to go but at least this is big progress.
Instead of positioning and orienting the planes in C#, you can have the shader assign their position and orientation if you assign the latitude and longitude to each vertex, and also assign the sphere center and radius:
Shader "Custom/SquareBender" {
Properties{
_MainTex("Tex", 2D) = "" {}
_SphereCenter("SphereCenter", Vector) = (0, 0, 0, 1)
_SphereRadius("SphereRadius", Float) = 5
}
SubShader{
Pass {
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata {
float2 uv : TEXCOORD0;
float2 lonLat : TEXCOORD1;
};
struct v2f
{
float4 pos : SV_POSITION;
float3 norm : NORMAL;
float2 uv : TEXCOORD0;
};
float4 _SphereCenter;
float _SphereRadius;
v2f vert(appdata v)
{
v2f o;
float lon = v.lonLat.x;
float lat = v.lonLat.y;
fixed4 posOffsetWorld = fixed4(
_SphereRadius*cos(lat)*cos(lon),
_SphereRadius*sin(lat),
_SphereRadius*cos(lat)*sin(lon), 0);
float4 posObj = mul(unity_WorldToObject,
posOffsetWorld + _SphereCenter);
o.pos = UnityObjectToClipPos(posObj);
o.uv = v.uv;
o.norm = mul(unity_WorldToObject, posOffsetWorld);
return o;
}
sampler2D _MainTex;
float4 frag(v2f IN) : COLOR
{
fixed4 col = tex2D(_MainTex, IN.uv);
return col;
}
ENDCG
}
}
FallBack "VertexLit"
}
And you can assign data to the vertices like this:
// tileIndex is column/row/zoom of current tile
// uv is relative position within tile
// (0,0) for bottom left, (1,1) top right
Vector2 GetLonLatOfVertex(Vector3Int tileIndex, Vector2 uv)
{
float lon, lat;
// Use tileIndex and uv to calculate lon, lat (in RADIANS)
// Exactly how you could do this depends on your tiling API...
return new Vector2(lon, lat);
}
// Call after plane mesh is created, and any additional vertices/uvs are set
// tileIndex is column/row/zoom of current tile
void SetUpTileLonLats(Mesh mesh, Vector3Int tileIndex)
{
Vector2[] uvs = mesh.uv;
Vector2[] lonLats= new Vector2[uvs.Length];
for (int i = 0; i < lonLats.Length; i++)
{
lonLats[i] = GetLonLatOfVertex(tileIndex, uvs[i]);
}
mesh.uv2 = lonLats;
}
The more vertices your plane has, the rounder your sphere will appear, although it will cause more distortion to the textures on the tiles. The tradeoff is up to you. Just be sure that if you procedurally add more vertices/triangles, you assign appropriate uvs to them.
Note that the positions of the vertices are assigned in the shader based on the lat/lon and have nothing to do with the object's Transform. If you have frustum culling on (which is on by default), ensure that the mesh component (centered on the transform, what you can see as wireframe in the scene view) is visible in the camera or unity will stop rendering it for the sake of ~efficiency~.
Example of using this process to draw a full sphere using tiles:
For a fast demonstration, create a new project, put this on the camera, and assign it with a material with the above shader and a texture to tile with.
It will create 16 planes with the same image, each plane encompassing 45 degrees of latitude and 90 degrees of longitude, wrapping them around a sphere. Assigning a different image to each plane is left as an exercise to the reader.
public class test : MonoBehaviour
{
[SerializeField] Material mat;
private void Start()
{
for (int i = 0 ; i < 16 ; i++)
{
int lonIndex = i % 4; // 0, 1, ..., 2, 3
int latIndex = i / 4 - 2; // -2, -2, ..., 1, 1
GameObject plane = GameObject.CreatePrimitive(PrimitiveType.Plane);
plane.GetComponent<MeshRenderer>().material = mat;
Vector3Int index = new Vector3Int(lonIndex, latIndex, 0);
SetUpTileLonLats(plane.GetComponent<MeshFilter>().mesh, index);
}
}
Vector2 GetLonLatOfVertex(Vector3Int tileIndex, Vector2 uv)
{
// Reminder: tileIndex goes from (0,-2) to (3,1)
// Needs to go from (0, -.5pi) to (2pi, .5pi) depending on uv & index
float lon = (tileIndex.x + uv.x) * 0.5f * Mathf.PI;
float lat = (tileIndex.y + uv.y) * 0.25f * Mathf.PI;
return new Vector2(lon, lat);
}
void SetUpTileLonLats(Mesh mesh, Vector3Int tileIndex)
{
Vector2[] uvs = mesh.uv;
Vector2[] lonLats = new Vector2[uvs.Length];
for (int i = 0; i < lonLats.Length; i++)
{
lonLats[i] = GetLonLatOfVertex(tileIndex, uvs[i]);
}
mesh.uv2 = lonLats;
}
}
For the positioning and rotation of the planes, you can do that in c#:
float x,y,z;
// ...
plane.transform.position = new Vector3(x,y,z);
// negative needed according to comments
Vector3 planeUp = new Vector3(x,y,-z);
Vector3 planeRight = Vector3.Cross(planeUp, Vector3.up);
Vector3 planeForward = Vector3.Cross(planeRight, planeUp);
plane.transform.rotation = Quaternion.LookRotation(planeForward, planeUp);
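The two cross products build an orthogonal basis for the tile: `right` is perpendicular to both the surface normal and the world up axis, and `forward` is perpendicular to `right` and the normal, which is exactly what `Quaternion.LookRotation` expects. A minimal C sketch of that construction (illustrative, not Unity code):

```c
#include <assert.h>
#include <math.h>

typedef struct { float x, y, z; } Vec3;

/* Cross product, used to build an orthogonal basis for the tile:
   right = up x worldUp, forward = right x up. */
Vec3 cross(Vec3 a, Vec3 b) {
    Vec3 r = { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x };
    return r;
}

float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
```

Because each cross product is perpendicular to both of its inputs, the resulting forward vector is guaranteed orthogonal to both the plane normal and the right axis.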
To make them bend into position is a lot harder, since it brings in the question of how to project each square onto a curved surface... How do you manage overlaps? Gaps? How can the edges of each plane be aligned?
Anyway, until that is decided, here's something to help visualize the issues. You can trace a line from each vertex of the quad towards the middle of the sphere and find the point along that line that's the same distance from the center as the center of the plane. Luckily this is doable in a shader you can attach to the plane. For the sake of brevity, this assumes the center of the sphere is at the world origin (0,0,0):
Shader "Custom/SquareBender" {
Properties{
_MainTex("Tex", 2D) = "" {}
}
SubShader {
Pass {
Tags {"LightMode" = "Always"}
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#include "UnityCG.cginc"
struct appdata {
float4 vertex : POSITION;
float2 uv : TEXCOORD0;
};
struct v2f
{
float4 pos : SV_POSITION;
float2 uv : TEXCOORD0;
};
v2f vert(appdata v)
{
v2f o;
// everything in obj space
float4 worldOrigin = mul(unity_WorldToObject,
float4(0,0,0,1));
float4 fromOriginToObj = float4(0,0,0,1) - worldOrigin;
float4 fromOriginToPos = v.vertex - worldOrigin;
float4 dirPos = normalize(fromOriginToPos);
float r = length(fromOriginToObj);
o.pos = UnityObjectToClipPos(r*dirPos + worldOrigin);
o.uv = v.uv;
return o;
}
sampler2D _MainTex;
float4 frag(v2f IN) : COLOR
{
fixed4 col = tex2D(_MainTex, IN.uv);
return col;
}
ENDCG
}
}
FallBack "VertexLit"
}
Example of using this method to place tiles on a sphere:
I made a solar system with the sun in the centre and planets rotating around it. I wanted to add a skybox; it is added properly, but I can't see my planets.
Before adding skybox:
After adding skybox:
main skybox code:
// skybox VAO
unsigned int skyboxVAO, skyboxVBO;
glGenVertexArrays(1, &skyboxVAO);
glGenBuffers(1, &skyboxVBO);
glBindVertexArray(skyboxVAO);
glBindBuffer(GL_ARRAY_BUFFER, skyboxVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(skyboxVertices), &skyboxVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
vector<std::string> faces
{
"res/textures/skybox/right.png",
"res/textures/skybox/left.png",
"res/textures/skybox/top.png",
"res/textures/skybox/bot.png",
"res/textures/skybox/front.png",
"res/textures/skybox/back.png"
};
unsigned int cubemapTexture = loadCubemap(faces);
window skybox code:
// draw skybox as last
glDepthFunc(GL_LEQUAL); // change depth function so depth test passes when values are equal to depth buffer's content
skyboxShader.use();
view = glm::mat4(glm::mat3(camera.GetViewMatrix())); // remove translation from the view matrix
skyboxShader.setMat4("view", view);
skyboxShader.setMat4("projection", projection);
// skybox cube
glBindVertexArray(skyboxVAO);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapTexture);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArray(0);
glDepthFunc(GL_LESS); // set depth function back to default
vertex shader and fragment shader:
#version 330 core
layout (location = 0) in vec3 aPos;
out vec3 TexCoords;
uniform mat4 projection;
uniform mat4 view;
void main()
{
TexCoords = aPos;
vec4 pos = projection * view * vec4(aPos, 1.0);
gl_Position = projection * view * vec4(aPos, 1.0);
}
#version 330 core
out vec4 FragColor;
in vec3 TexCoords;
uniform samplerCube skybox;
void main()
{
FragColor = texture(skybox, TexCoords);
}
I can display the below image using OpenGL in my glControl. Is there any way to cut or hide a particular area (say 50px from top and bottom) equally from both the top and bottom of the glControl? Below is the code I have used to calculate the size of the glControl. Can I achieve it by changing values on the viewport?
private void OpenGL_Size(GLControl glControl, VideoCaptureDevice videoSource)//always in portrait mode
{
decimal RateOfResolution = (decimal)videoSource.VideoResolution.FrameSize.Width / (decimal)videoSource.VideoResolution.FrameSize.Height;
decimal screenHeightbyTwo = this._Screenheight / 2;
RateOfResolution = 1 / RateOfResolution;// portrait
openGLheight = Convert.ToInt32(screenHeightbyTwo); // height is fixed; calculate the width
openGLwidth = (Convert.ToInt32(RateOfResolution * screenHeightbyTwo));
glControl.Width = openGLwidth;
glControl.Height = openGLheight;
}
GL.Viewport(new Rectangle(0, 0, glControl.Width, glControl.Height));
Shader code
precision highp float;
uniform sampler2D sTexture;
varying vec2 vTexCoordIn;
void main ()
{
vec4 color = texture2D(sTexture,vTexCoordIn);
gl_FragColor = color;
}
If you want to skip some parts of the texture, then you can use the discard keyword. This command causes the fragment's output values to be discarded; the fragment is not drawn at all.
precision highp float;
uniform sampler2D sTexture;
varying vec2 vTexCoordIn;
void main ()
{
if (vTexCoordIn.y < 0.1 || vTexCoordIn.y > 0.9)
discard;
vec4 color = texture2D(sTexture, vTexCoordIn);
gl_FragColor = color;
}
If the height of the image and the height of the area which has to be discarded are given, then the condition is:
float img_h_px = 432.0; // height of the image in pixel
float area_h_px = 50.0; // area height in pixel
float w = area_h_px/img_h_px;
if (vTexCoordIn.y < w || vTexCoordIn.y > (1.0-w))
discard;
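The same threshold test can be checked outside the shader. This C sketch (an illustration, assuming the 432 px example image from the text) computes the discard fraction and evaluates the visibility condition:

```c
#include <assert.h>
#include <math.h>

/* Fraction of the texture to discard at top and bottom, given the
   image height and the strip height in pixels. */
float strip_fraction(float area_h_px, float img_h_px) {
    return area_h_px / img_h_px;
}

/* 1 if the fragment at texture coordinate y survives the discard test. */
int visible(float y, float w) {
    return !(y < w || y > 1.0f - w);
}
```

With a 50 px strip on a 432 px image, w is about 0.116, so roughly the top and bottom 12% of the texture is skipped.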
I want to show a selected area from the second half of an image (the range from 0.5 to 1.0) in my glControl. For that I have used two variables, rightsliderStartval (any value between 0.5 and 1.0)
and rightsliderEndval (any value between 1.0 and 0.5). I want exactly the selected area between rightsliderStartval and rightsliderEndval. When trying as below, the selected area is shown but it gets stretched.
decimal RateOfResolution = (decimal)videoSource.VideoResolution.FrameSize.Width / (decimal)videoSource.VideoResolution.FrameSize.Height;
int openGLwidth = (this._Screenwidth / 3) - 40;
int openGLheight = Convert.ToInt32(screenWidthbyThree / RateOfResolution);
glControl.Width = openGLwidth;
glControl.Height = openGLheight;
GL.Viewport(new Rectangle(0, 0, glControl.Width, glControl.Height));
public void CreateShaders()
{
/***********Vert Shader********************/
vertShader = GL.CreateShader(ShaderType.VertexShader);
GL.ShaderSource(vertShader, @"attribute vec3 a_position;
varying vec2 vTexCoordIn;
void main() {
vTexCoordIn = (a_position.xy + 1.0) / 2.0;
gl_Position = vec4(a_position,1);
}");
GL.CompileShader(vertShader);
/***********Frag Shader ****************/
fragShader = GL.CreateShader(ShaderType.FragmentShader);
GL.ShaderSource(fragShader, @"precision highp float;
uniform sampler2D sTexture;
varying vec2 vTexCoordIn;
void main ()
{
vec2 vTexCoord=vec2(vTexCoordIn.x,vTexCoordIn.y);
float rightsliderStartval=0.6;//0.5 to 1.0
float rightsliderEndval=0.8;//1.0 to 0.5
float rightsliderDelta=rightsliderEndval-rightsliderStartval;
if (vTexCoordIn.x < 0.5)
discard;
float u = mix(rightsliderStartval, rightsliderEndval, (vTexCoordIn.x-0.5) * 2.0);
vec4 color = texture2D(sTexture, vec2(u, vTexCoordIn.y));
gl_FragColor = color;
}");
GL.CompileShader(fragShader);
}
In Screenshot, White line represent center of image. I Want to show area between yellow and orange line.
If you want to skip some parts of the texture, then you can use the discard keyword. This command causes the fragment's output values to be discarded; the fragment is not drawn at all.
If you have a rectangular area and you want to draw only in the 2nd half of it, then you have to discard the fragments in the 1st half:
if (vTexCoordIn.x < 0.5)
discard;
If you want to draw the range from rightsliderStartval to rightsliderEndval in the 2nd half of the rectangular area, then you have to map the range [0.5, 1.0] of the incoming texture coordinate vTexCoordIn.x to [rightsliderStartval, rightsliderEndval]:
float w = (vTexCoordIn.x-0.5) * 2.0; // [0.5, 1.0] -> [0.0, 1.0]
float u = mix(rightsliderStartval, rightsliderEndval, w); // [0.0, 1.0] -> [0.7, 0.9]
This leads to the fragment shader:
precision highp float;
uniform sampler2D sTexture;
varying vec2 vTexCoordIn;
void main ()
{
float rightsliderStartval = 0.7;
float rightsliderEndval = 0.9;
if (vTexCoordIn.x < 0.5)
discard;
float u = mix(rightsliderStartval, rightsliderEndval, (vTexCoordIn.x-0.5) * 2.0);
vec4 color = texture2D(sTexture, vec2(u, vTexCoordIn.y));
gl_FragColor = color;
}
If you don't want the image to be stretched, then you have 2 possibilities.
Either discard the region from 0.0 to 0.7 and 0.9 to 1.0:
precision highp float;
uniform sampler2D sTexture;
varying vec2 vTexCoordIn;
void main ()
{
float rightsliderStartval = 0.7;
float rightsliderEndval = 0.9;
if (vTexCoordIn.x < 0.7 || vTexCoordIn.x > 0.9)
discard;
vec4 color = texture2D(sTexture, vTexCoordIn.xy);
gl_FragColor = color;
}
Or scale the image in the y direction, too:
precision highp float;
uniform sampler2D sTexture;
varying vec2 vTexCoordIn;
void main ()
{
float rightsliderStartval = 0.7;
float rightsliderEndval = 0.9;
if (vTexCoordIn.x < 0.5)
discard;
float u = mix(rightsliderStartval, rightsliderEndval, (vTexCoordIn.x-0.5) * 2.0);
float v_scale = (rightsliderEndval - rightsliderStartval) / 0.5;
float v = vTexCoordIn.y * v_scale + (1.0 - v_scale) / 2.0;
vec4 color = texture2D(sTexture, vec2(u, v));
gl_FragColor = color;
}
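The second variant works because the vertical mapping is scaled by the same factor as the horizontal one and centered: the middle of the texture (y = 0.5) stays fixed while the visible band shrinks around it. A standalone C check of both mappings (illustrative only, using the 0.7/0.9 slider values from the answer):

```c
#include <assert.h>
#include <math.h>

/* GLSL-style mix(): linear interpolation between a and b. */
float mixf(float a, float b, float t) { return a + (b - a) * t; }

/* Horizontal remap of [0.5, 1.0] to [start, end], and the matching
   vertical scale that preserves the aspect ratio, as in the shader. */
float map_u(float x, float start, float end) {
    return mixf(start, end, (x - 0.5f) * 2.0f);
}
float map_v(float y, float start, float end) {
    float v_scale = (end - start) / 0.5f;
    return y * v_scale + (1.0f - v_scale) / 2.0f;
}
```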
I have an issue and need help to resolve it.
I am trying to colorize some parts of a sprite dynamically with a custom shader and additional textures as masks. That in itself is not a problem, and I solved it easily.
Screen for better understanding:
This is the real implementation; the color blending for the third image was realized by shader, sprite, and mask.
But I was forced to use atlases to save RAM, since I have too many sprites to load at the same time (3 states and 5 directions for each of various units). And this caused problems. When I try to do this in edit mode (using ExecuteInEditMode) everything works fine. But when I press the "play" button, it all breaks down.
How it looks:
As I understand it, the problem is that when I press "play" the atlas is built. So when I get a sprite from it, I get it successfully. But when I try to get the texture from the Sprite to set it on the shader, I get the big texture (the full atlas). The shader knows nothing about the sprite's actual position on this sheet, so it cannot check whether a pixel needs colorizing.
So my question is: how can I get the "small" texture from the sprite to set it on the shader?
How I set "mask" to shader:
public void UpdateMask(Texture tex)
{
//Debug.LogFormat("UpdateMask {0}", tex);
m_SRenderer.GetPropertyBlock(m_SpriteMbp);
m_SpriteMbp.SetTexture("_Mask", tex);
m_SRenderer.SetPropertyBlock(m_SpriteMbp);
}
Some shader's pieces:
Properties
{
[PerRendererData] _MainTex("Sprite Texture (RGB)", 2D) = "white" {}
[PerRendererData] _Mask("Alpha (A)", 2D) = "white" {}
_FriendlyColor("FriendlyColor", Color) = (1,1,1,1)
_EnemyColor("EnemyColor", Color) = (1,1,1,1)
_NeutralColor("NeutralColor", Color) = (1,1,1,1)
_Intencity("BlendingIntencity", Range(0, 1)) = 0.5
[PerRendererData] _IsFriendly("IsFriendly", Float) = 0
[PerRendererData] _IsHighlight("Outline", Range(0, 1)) = 0
[PerRendererData] _HighlightColor("Outline Color", Color) = (1,1,1,1)
}
fixed4 frag(v2f IN) : COLOR
{
fixed4 mainTex = tex2D(_MainTex, IN.texcoord) * IN.color;
fixed4 alphaMask = tex2D(_Mask, IN.texcoord) * IN.color;
fixed4 output;
fixed4 blendColor;
if (alphaMask.a > 1.0e-6)
{
if (_IsFriendly == 4)
{
blendColor = fixed4(_NeutralColor.r, _NeutralColor.g, _NeutralColor.b, alphaMask.r);
}
else
{
if (_IsFriendly == 1)
{
blendColor = fixed4(_FriendlyColor.r, _FriendlyColor.g, _FriendlyColor.b, alphaMask.r);
}
else
{
blendColor = fixed4(_EnemyColor.r, _EnemyColor.g, _EnemyColor.b, alphaMask.r);
}
}
output = BlendOverelay(mainTex, blendColor * _Intencity);
}
else
{
output = mainTex;
output.rgb *= output.a;
}
if (_IsHighlight != 0)
{
fixed4 blendedColor = BlendAdditive(output, _HighlightColor);
blendedColor.a = output.a;
blendedColor.rgb *= output.a;
output = blendedColor;
}
return output;
}
You need to tell the sprite renderer where its position in the atlas is and how large the atlas is, so that it can convert the IN.texcoord UV in atlas space to the corresponding UV in sprite space. Then you can sample from the alpha map using the sprite-space UV.
In C#, set the atlas offset & scale information to e.g., _AtlasPosition:
public void UpdateMask(Texture tex)
{
//Debug.LogFormat("UpdateMask {0}", tex);
m_SRenderer.GetPropertyBlock(m_SpriteMbp);
m_SpriteMbp.SetTexture("_Mask", tex);
Vector4 result = new Vector4(sprite.textureRect.position.x, sprite.textureRect.position.y, sprite.textureRect.size.x, sprite.textureRect.size.y);
m_SpriteMbp.SetVector("_AtlasPosition", result);
m_SRenderer.SetPropertyBlock(m_SpriteMbp);
}
In shader, calculate the current UV in sprite space, and use it to sample from _Mask:
fixed4 frag(v2f IN) : COLOR
{
fixed4 mainTex = tex2D(_MainTex, IN.texcoord) * IN.color;
// multiply both the position offset and size by the texel size to bring them into UV space
float4 atlasOffsetScale = _AtlasPosition * _MainTex_TexelSize.xyxy;
// apply UV position offset and scale, sample from alpha mask
fixed4 alphaMask = tex2D(_Mask, (IN.texcoord - atlasOffsetScale.xy) / atlasOffsetScale.zw) * IN.color;
fixed4 output;
fixed4 blendColor;
// ...
You'll have to declare _MainTex_TexelSize in your shader if you haven't already.
This will not work if you are using tight packing. For Sprite Packer, you will need to specify DefaultPackerPolicy in the sprite packer or specify [RECT] in the packing tag. If you are using SpriteAtlas, you will need to disable Tight Packing.
Code sourced from this thread on the unity forums
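As a separate illustration (not from the thread), the atlas-to-sprite UV remapping can be verified in plain C. The rect position/size and atlas dimensions below are hypothetical values chosen for the check:

```c
#include <assert.h>
#include <math.h>

/* Convert a UV in atlas space to sprite-local space, given the sprite's
   rect (in pixels) and the atlas texel size (1/width, 1/height).
   Mirrors (IN.texcoord - offset.xy) / offset.zw in the shader. */
void atlas_to_sprite_uv(float u, float v,
                        float rect_x, float rect_y, float rect_w, float rect_h,
                        float texel_w, float texel_h,
                        float *su, float *sv) {
    float off_x = rect_x * texel_w, off_y = rect_y * texel_h;
    float scl_x = rect_w * texel_w, scl_y = rect_h * texel_h;
    *su = (u - off_x) / scl_x;
    *sv = (v - off_y) / scl_y;
}
```

For a 1024x1024 atlas with a 128x128 sprite at pixel (256, 512), the sprite's corners in atlas UV space map back to (0,0) and (1,1) in sprite space, which is what lets the mask texture be sampled as if it were standalone.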