OpenGL3: Rotate object around its own center - C#

I'm trying to rotate an object in OpenGL (using the OpenTK framework), but it rotates around the zero point. It makes sense that it rotates around the origin, but I don't know how to make it rotate around its own center (or some other point).
public static void Texture(Region region, float x, float y, float z, float rotateX, float rotateY, float rotateZ)
    => Texture(
        region,
        Matrix4.CreateTranslation(x, y, z),
        Matrix4.CreateRotationX(rotateX) * Matrix4.CreateRotationY(rotateY) * Matrix4.CreateRotationZ(rotateZ),
        TestGame.Camera.GetViewMatrix(),
        TestGame.Camera.GetProjectionMatrix()
    );
public static void Texture(Region region, Matrix4 translate, Matrix4 model, Matrix4 view, Matrix4 projection)
{
    Shaders.TextureShader.Texture(TextureUnit.Texture0); //Bind texture as 0 in shader
    region.Reference.Use(); //Bind texture as 0
    Shaders.TextureShader.Matrix4("translate", translate);
    Shaders.TextureShader.Matrix4("model", model);
    Shaders.TextureShader.Matrix4("view", view);
    Shaders.TextureShader.Matrix4("projection", projection);
    Shaders.TextureShader.Draw();
}
#version 330 core
layout(location = 0) in vec3 aPosition;
layout(location = 1) in vec2 aTexCoord;
out vec2 texCoord;
uniform mat4 translate;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main(void)
{
texCoord = aTexCoord;
gl_Position = vec4(aPosition, 1) * translate * model * view * projection;
}

Matrix multiplications are not commutative; the order matters. Rotate the object before translating it, i.e. change
gl_Position = vec4(aPosition, 1) * translate * model * view * projection;
to
gl_Position = vec4(aPosition, 1) * model * translate * view * projection;
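If the mesh itself is not modeled around the origin, the usual trick is to translate the pivot point to the origin, rotate, and translate back. A minimal OpenTK sketch under that assumption (center is a hypothetical Vector3 holding the object's own center; the rotate parameters are the ones from the question):
// Hypothetical sketch: rotate around `center` instead of the world origin.
// OpenTK uses row vectors, so the leftmost matrix is applied first.
Vector3 center = new Vector3(0.5f, 0.5f, 0.0f); // placeholder for the object's pivot
Matrix4 rotation = Matrix4.CreateRotationX(rotateX) * Matrix4.CreateRotationY(rotateY) * Matrix4.CreateRotationZ(rotateZ);
Matrix4 model =
    Matrix4.CreateTranslation(-center) * // move the pivot to the origin
    rotation *                           // rotate around the origin
    Matrix4.CreateTranslation(center);   // move it back
With this model matrix, the corrected shader order vec4(aPosition, 1) * model * translate * view * projection still applies the object's own translation after the rotation.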

Related

Encoding vertex positions into textures, positions don't match and index is always zero

I am trying to make a tool to encode vertex positions into a texture. The tool takes a sequence of Wavefront obj files and exports 2 textures. I am, for the most part, following this guide. I am using C# and Veldrid for my program. My program also shows a preview to see what the result looks like. I am having trouble getting my preview to use the textures correctly. The textures have the below mapping.
Texture 1:
RG - X Position
BA - Y Position
Texture 2:
RG - Z Position
BA - Normals (eventually; haven't gotten there yet).
I have two issues. My first issue is that the position is not being decoded correctly. The second issue is that gl_VertexIndex seems to always be zero.
For my first issue, in order to see what was going on, I set the texture coords for the texture to 0, 0 to sample the first vertex of the first frame. I also removed any view transformation so that I could see the actual values in renderdoc.
In Renderdoc, the VS_Input is 11.67803, 1.00, -11.06608 and the VS_Out is 5.75159, 1.99283, -5.03286. When using gl_VertexIndex, all the vertices for VS_Out read the same thing.
#version 450
layout(location = 0) in vec3 Position;
layout(location = 1) in vec3 Normal;
layout(location = 2) in vec2 TexCoords;
layout(location = 3) in uint Id;
layout(location = 0) out vec3 outNormal;
layout(location = 1) out vec4 outDebug;
layout(set = 0, binding = 0) uniform MVP {
    mat4 Model;
    mat4 View;
    mat4 Projection;
};
layout(set = 0, binding = 1) uniform sampler textureSampler;
layout(set = 0, binding = 2) uniform texture2D posTex;
layout(set = 0, binding = 3) uniform texture2D normalTex;
float RemapRange(float value, float from1, float to1, float from2, float to2){
    return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}
float DecodeFloatRG (vec2 enc){
    vec2 kDecodeDot = vec2(1.0, 1 / 255.0);
    return dot(enc, kDecodeDot);
}
void main(){
    outDebug = Projection * View * Model * vec4(Position, 1.0f);
    vec2 coords = vec2(0, 0);
    vec4 pos = textureLod(sampler2D(posTex, textureSampler), coords, 0);
    vec4 normal = textureLod(sampler2D(normalTex, textureSampler), coords, 0);
    vec3 decodedPos;
    decodedPos.x = DecodeFloatRG(pos.xy);
    decodedPos.y = DecodeFloatRG(pos.zw);
    decodedPos.z = DecodeFloatRG(normal.xy);
    float x = RemapRange(decodedPos.x, 0.0f, 1.0f, -13.0f, 13.0f); //right now this is hardcoded
    float y = RemapRange(decodedPos.y, 0.0f, 1.0f, -13.0f, 13.0f);
    float z = RemapRange(decodedPos.z, 0.0f, 1.0f, -13.0f, 13.0f);
    //gl_Position = Projection * View * Model * vec4(x, y, z, 1.0f);
    gl_Position = vec4(x, y, z, 1.0f);
    //gl_Position = vec4(Position, 1.0f);
    outNormal = Normal;
}
For the second issue, the shader is the same, but instead I'm using:
coords = vec2(gl_VertexIndex, 0)
I'm also not sure that using vertex index is the best way to go about this, as it seems like most game engines don't have this exposed.
On the CPU side, I encode the textures using the below:
//https://forum.unity.com/threads/re-map-a-number-from-one-range-to-another.119437/
protected float RemapRange(float value, float from1, float to1, float from2, float to2){
    return (value - from1) / (to1 - from1) * (to2 - from2) + from2;
}
//https://medium.com/tech-at-wildlife-studios/texture-animation-techniques-1daecb316657
protected Vector2 EncodeFloatRG (float v){
    Vector2 kEncodeMul = new Vector2(1.0f, 255.0f);
    float kEncodeBit = 1.0f / 255.0f;
    Vector2 enc = kEncodeMul * v;
    enc.X = fract(enc.X);
    enc.Y = fract(enc.Y);
    enc.X -= enc.Y * kEncodeBit;
    return enc;
}
float fract(float x){
    return x - MathF.Floor(x);
}
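As a sanity check, the encoder can be inverted on the CPU with the same dot product the shader uses. A small sketch (DecodeFloatRG here is just a C# mirror of the GLSL function above, not part of the original tool):
// C# mirror of the GLSL DecodeFloatRG: dot(enc, vec2(1.0, 1/255.0)).
// Encode followed by Decode is exact before the texture quantizes each channel to 8 bits.
float DecodeFloatRG(Vector2 enc){
    return enc.X + enc.Y / 255.0f;
}
float value = 0.537f;
Vector2 encoded = EncodeFloatRG(value);
float roundTripped = DecodeFloatRG(encoded); // ~0.537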
This is what the loop writing the pixels looks like. There is another one for the second texture, but it's pretty much the same.
posImg.Mutate(c => c.ProcessPixelRowsAsVector4(row =>
{
    for (int x = 0; x < row.Length; x++)
    {
        var obj = meshes[y];
        var vertex = obj.Vertices[x];
        var pixel = new Vector4();
        float X = RemapRange(vertex.Position.X, bounds.Min, bounds.Max, 0.0f, 1.0f);
        float Y = RemapRange(vertex.Position.Y, bounds.Min, bounds.Max, 0.0f, 1.0f);
        var encodedX = EncodeFloatRG(X);
        var encodedY = EncodeFloatRG(Y);
        pixel.X = encodedX.X;
        pixel.Y = encodedX.Y;
        pixel.Z = encodedY.X;
        pixel.W = encodedY.Y;
        row[x] = pixel;
    }
    y += 1;
}));
This is how I am creating and loading the textures in Veldrid. As far as the sampler goes, it is gd.PointSampler. I have tried turning sRGB on and off on the ImageSharpTexture() and using R8_G8_B8_A8_UNorm_SRgb and R8_G8_B8_A8_UNorm, and pretty much any combination of those.
var posTex = new Veldrid.ImageSharp.ImageSharpTexture(posPath, false, true);
var normalTex = new Veldrid.ImageSharp.ImageSharpTexture(normalPath, false, true);
var posDeviceTex = posTex.CreateDeviceTexture(gd, gd.ResourceFactory);
var normalDeviceTex = normalTex.CreateDeviceTexture(gd, gd.ResourceFactory);
var posViewDesc = new TextureViewDescription(posDeviceTex, PixelFormat.R8_G8_B8_A8_UNorm_SRgb);
var normalViewDesc = new TextureViewDescription(normalDeviceTex, PixelFormat.R8_G8_B8_A8_UNorm_SRgb);
positionTexture = gd.ResourceFactory.CreateTextureView(posViewDesc);
normalTexture = gd.ResourceFactory.CreateTextureView(normalViewDesc);
EDIT:
I tried hard-coding the value of pixel (0, 0) of the texture in the shader like below. When I do this, the result is correct and matches the original vertex position. When reading the pixel values of the texture in the shader and exporting them directly, the values are wrong, so I am thinking there is some compression or color-space weirdness going on when reading the texture in. For example, in the shader the correct value for the pixel at (0, 0) should be (0.9490196, 0.03529412, 0.5372549, 0.30588236), but in RenderDoc it shows as (0.55492, 0.28516, 0.29102, 0.54314).
outDebug = Projection * View * Model * vec4(Position, 1.0f);
vec2 coords = vec2(0.0, 0.0);
vec4 pos = textureLod(sampler2D(posTex, textureSampler), coords, 0);
vec4 normal = textureLod(sampler2D(normalTex, textureSampler), coords, 0);
pos = vec4(0.9490196, 0.03529412, 0.5372549, 0.30588236);
normal = vec4(0.07058824, 0.96862745, 1, 1);
vec3 decodedPos;
decodedPos.x = DecodeFloatRG(pos.xy);
decodedPos.y = DecodeFloatRG(pos.zw);
decodedPos.z = DecodeFloatRG(normal.xy);
float x = RemapRange(decodedPos.x, 0.0f, 1.0f, -13.0f, 13.0f);
float y = RemapRange(decodedPos.y, 0.0f, 1.0f, -13.0f, 13.0f);
float z = RemapRange(decodedPos.z, 0.0f, 1.0f, -13.0f, 13.0f);
gl_Position = vec4(x, y, z, 1.0f);
Texture 1:
Texture 2:
Google Drive With Textures, Obj, and Metadata
So I figured this out. This code block needs to be:
var posTex = new Veldrid.ImageSharp.ImageSharpTexture(posPath, false, true);
var normalTex = new Veldrid.ImageSharp.ImageSharpTexture(normalPath, false, true);
var posDeviceTex = posTex.CreateDeviceTexture(gd, gd.ResourceFactory);
var normalDeviceTex = normalTex.CreateDeviceTexture(gd, gd.ResourceFactory);
var posViewDesc = new TextureViewDescription(posDeviceTex, PixelFormat.R8_G8_B8_A8_UNorm);
var normalViewDesc = new TextureViewDescription(normalDeviceTex, PixelFormat.R8_G8_B8_A8_UNorm);
positionTexture = gd.ResourceFactory.CreateTextureView(posViewDesc);
normalTexture = gd.ResourceFactory.CreateTextureView(normalViewDesc);
instead, i.e. with PixelFormat.R8_G8_B8_A8_UNorm rather than the sRGB format. While fixing this, someone also mentioned that I needed to declare my TextureViews before my sampler if I wanted to use the same sampler for both TextureViews.
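For that ordering point, a sketch of what the resource layout might look like in Veldrid (names match the shader above; this assumes the shader's set 0 bindings are renumbered so the two textures come before the sampler):
// Hypothetical sketch: declare the two TextureViews before the shared sampler.
ResourceLayout layout = gd.ResourceFactory.CreateResourceLayout(new ResourceLayoutDescription(
    new ResourceLayoutElementDescription("MVP", ResourceKind.UniformBuffer, ShaderStages.Vertex),
    new ResourceLayoutElementDescription("posTex", ResourceKind.TextureReadOnly, ShaderStages.Vertex),
    new ResourceLayoutElementDescription("normalTex", ResourceKind.TextureReadOnly, ShaderStages.Vertex),
    new ResourceLayoutElementDescription("textureSampler", ResourceKind.Sampler, ShaderStages.Vertex)));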
As far as gl_VertexIndex goes, I'm looking into how to map the data into a spare UV channel instead, as that should always be available in any game engine.
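A sketch of that idea (mesh, TexCoords2, textureWidth, and textureHeight are hypothetical names here): when building the mesh, write each vertex's own texel centre into a spare UV channel and sample with it instead of gl_VertexIndex:
// Hypothetical sketch: bake the per-vertex lookup coordinate into a spare UV channel.
// One column per vertex, one row per frame of the animation.
for (int i = 0; i < mesh.Vertices.Count; i++)
{
    float u = (i + 0.5f) / textureWidth;  // +0.5 hits the centre of the texel
    float v = 0.5f / textureHeight;       // row 0 = first frame
    mesh.Vertices[i].TexCoords2 = new Vector2(u, v);
}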

Skybox is covering models

I made a solar system with the sun in the centre and planets rotating around it. I wanted to add a skybox; it is added properly, but I can't see my planets.
Before adding skybox:
After adding skybox:
main skybox code:
// skybox VAO
unsigned int skyboxVAO, skyboxVBO;
glGenVertexArrays(1, &skyboxVAO);
glGenBuffers(1, &skyboxVBO);
glBindVertexArray(skyboxVAO);
glBindBuffer(GL_ARRAY_BUFFER, skyboxVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(skyboxVertices), &skyboxVertices, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
vector<std::string> faces
{
"res/textures/skybox/right.png",
"res/textures/skybox/left.png",
"res/textures/skybox/top.png",
"res/textures/skybox/bot.png",
"res/textures/skybox/front.png",
"res/textures/skybox/back.png"
};
unsigned int cubemapTexture = loadCubemap(faces);
window skybox code:
// draw skybox as last
glDepthFunc(GL_LEQUAL); // change depth function so depth test passes when values are equal to depth buffer's content
skyboxShader.use();
view = glm::mat4(glm::mat3(camera.GetViewMatrix())); // remove translation from the view matrix
skyboxShader.setMat4("view", view);
skyboxShader.setMat4("projection", projection);
// skybox cube
glBindVertexArray(skyboxVAO);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_CUBE_MAP, cubemapTexture);
glDrawArrays(GL_TRIANGLES, 0, 36);
glBindVertexArray(0);
glDepthFunc(GL_LESS); // set depth function back to default
vertex shader and fragment shader:
#version 330 core
layout (location = 0) in vec3 aPos;
out vec3 TexCoords;
uniform mat4 projection;
uniform mat4 view;
void main()
{
TexCoords = aPos;
vec4 pos = projection * view * vec4(aPos, 1.0);
gl_Position = projection * view * vec4(aPos, 1.0);
}
#version 330 core
out vec4 FragColor;
in vec3 TexCoords;
uniform samplerCube skybox;
void main()
{
FragColor = texture(skybox, TexCoords);
}

Shader code to show selected part of image - OpenTk

I want to show a selected area from the second half of an image (the range from 0.5 to 1.0) in my GLControl. For that I have used two variables, rightsliderStartval (any value between 0.5 and 1.0) and rightsliderEndval (any value between 1.0 and 0.5). I want exactly the selected area between rightsliderStartval and rightsliderEndval. When I try it like below, the selected area is shown, but it gets stretched.
decimal RateOfResolution = (decimal)videoSource.VideoResolution.FrameSize.Width / (decimal)videoSource.VideoResolution.FrameSize.Height;
int openGLwidth = (this._Screenwidth / 3) - 40;
int openGLheight = Convert.ToInt32(screenWidthbyThree / RateOfResolution);
glControl.Width = openGLwidth;
glControl.Height = openGLheight;
GL.Viewport(new Rectangle(0, 0, glControl.Width, glControl.Height));
public void CreateShaders()
{
    /***********Vert Shader********************/
    vertShader = GL.CreateShader(ShaderType.VertexShader);
    GL.ShaderSource(vertShader, @"attribute vec3 a_position;
        varying vec2 vTexCoordIn;
        void main() {
            vTexCoordIn = (a_position.xy + 1) / 2;
            gl_Position = vec4(a_position, 1);
        }");
    GL.CompileShader(vertShader);
    /***********Frag Shader ****************/
    fragShader = GL.CreateShader(ShaderType.FragmentShader);
    GL.ShaderSource(fragShader, @"precision highp float;
        uniform sampler2D sTexture;
        varying vec2 vTexCoordIn;
        void main ()
        {
            vec2 vTexCoord = vec2(vTexCoordIn.x, vTexCoordIn.y);
            float rightsliderStartval = 0.6; // 0.5 to 1.0
            float rightsliderEndval = 0.8;   // 1.0 to 0.5
            float rightsliderDelta = rightsliderEndval - rightsliderStartval;
            if (vTexCoordIn.x < 0.5)
                discard;
            float u = mix(rightsliderStartval, rightsliderEndval, (vTexCoordIn.x - 0.5) * 2.0);
            vec4 color = texture2D(sTexture, vec2(u, vTexCoordIn.y));
            gl_FragColor = color;
        }");
    GL.CompileShader(fragShader);
}
In the screenshot, the white line represents the center of the image. I want to show the area between the yellow and orange lines.
If you want to skip some parts of the texture, then you can use the discard keyword. This causes the fragment's output values to be discarded, and the fragment is not drawn at all.
If you have a rectangular area and you want to draw only in the 2nd half of it, then you have to discard the fragments in the 1st half:
if (vTexCoordIn.x < 0.5)
discard;
If you want to draw the range from rightsliderStartval to rightsliderEndval in the 2nd half of the rectangular area, then you have to map the incoming texture coordinate vTexCoordIn.x from the range [0.5, 1.0] to [rightsliderStartval, rightsliderEndval]:
float w = (vTexCoordIn.x-0.5) * 2.0; // [0.5, 1.0] -> [0.0, 1.0]
float u = mix(rightsliderStartval, rightsliderEndval, w); // [0.0, 1.0] -> [0.7, 0.9]
This leads to the fragment shader:
precision highp float;
uniform sampler2D sTexture;
varying vec2 vTexCoordIn;
void main ()
{
float rightsliderStartval = 0.7;
float rightsliderEndval = 0.9;
if (vTexCoordIn.x < 0.5)
discard;
float u = mix(rightsliderStartval, rightsliderEndval, (vTexCoordIn.x-0.5) * 2.0);
vec4 color = texture2D(sTexture, vec2(u, vTexCoordIn.y));
gl_FragColor = color;
}
If you don't want the image to be stretched, then you have 2 possibilities.
Either discard the region from 0.0 to 0.7 and from 0.9 to 1.0:
precision highp float;
uniform sampler2D sTexture;
varying vec2 vTexCoordIn;
void main ()
{
float rightsliderStartval = 0.7;
float rightsliderEndval = 0.9;
if (vTexCoordIn.x < 0.7 || vTexCoordIn.x > 0.9)
discard;
vec4 color = texture2D(sTexture, vTexCoordIn.xy);
gl_FragColor = color;
}
Or scale the image in the y direction, too:
precision highp float;
uniform sampler2D sTexture;
varying vec2 vTexCoordIn;
void main ()
{
float rightsliderStartval = 0.7;
float rightsliderEndval = 0.9;
if (vTexCoordIn.x < 0.5)
discard;
float u = mix(rightsliderStartval, rightsliderEndval, (vTexCoordIn.x-0.5) * 2.0);
float v_scale = (rightsliderEndval - rightsliderStartval) / 0.5;
float v = vTexCoordIn.y * v_scale + (1.0 - v_scale) / 2.0;
vec4 color = texture2D(sTexture, vec2(u, v));
gl_FragColor = color;
}
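Rather than hard-coding 0.7 and 0.9, the slider values can be uploaded from the C# side whenever they change. A minimal OpenTK sketch, assuming the fragment shader declares them as uniform float and shaderProgram is the linked program handle (illustrative names):
// Hypothetical sketch: feed the slider range into the shader as uniforms.
GL.UseProgram(shaderProgram);
GL.Uniform1(GL.GetUniformLocation(shaderProgram, "rightsliderStartval"), 0.7f);
GL.Uniform1(GL.GetUniformLocation(shaderProgram, "rightsliderEndval"), 0.9f);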

C# OpenGL - Trying to get the light to move with the camera using shaders

I am currently building a test scene with shaders.
Here is my vertex shader code:
public const string VertexShaderText = @"
    #version 130
    in vec3 vertexPosition;
    in vec3 vertexNormal;
    in vec3 vertexTangent;
    in vec2 vertexUV;
    uniform vec3 light_direction;
    out vec3 normal;
    out vec2 uv;
    out vec3 light;
    uniform mat4 projection_matrix;
    uniform mat4 view_matrix;
    uniform mat4 model_matrix;
    uniform bool enable_mapping;
    void main(void)
    {
        normal = normalize((model_matrix * vec4(floor(vertexNormal), 0)).xyz);
        uv = vertexUV;
        mat3 tbnMatrix = mat3(vertexTangent, cross(vertexTangent, normal), normal);
        light = (enable_mapping ? light_direction * tbnMatrix : light_direction);
        gl_Position = projection_matrix * view_matrix * model_matrix * vec4(vertexPosition, 1);
    }
";
Here is my Fragment Shader:
public const string FragmentShaderText = @"
    #version 130
    uniform sampler2D colorTexture;
    uniform sampler2D normalTexture;
    uniform bool enableToggleLighting;
    uniform mat4 model_matrix;
    uniform bool enable_mapping;
    uniform float alpha;
    uniform float ambi;
    in vec3 normal;
    in vec2 uv;
    in vec3 light;
    out vec4 fragment;
    void main(void)
    {
        vec3 fragmentNormal = texture2D(normalTexture, uv).xyz * 2 - 1;
        vec3 selectedNormal = (enable_mapping ? fragmentNormal : normal);
        float diffuse = max(dot(selectedNormal, light), 0);
        float ambient = ambi;
        float lighting = (enableToggleLighting ? max(diffuse, ambient) : 1);
        fragment = vec4(lighting * texture2D(colorTexture, uv).xyz, alpha);
    }
";
My Project is initialized like this:
CanvasControlSettings.ShaderProgram.Use();
CanvasControlSettings.ShaderProgram["projection_matrix"].SetValue(mainSceneProj);
CanvasControlSettings.ShaderProgram["light_direction"].SetValue(new Vector3(0f, 0f, 1f));
CanvasControlSettings.ShaderProgram["enableToggleLighting"].SetValue(CanvasControlSettings.ToggleLighting);
CanvasControlSettings.ShaderProgram["normalTexture"].SetValue(1);
CanvasControlSettings.ShaderProgram["enable_mapping"].SetValue(CanvasControlSettings.ToggleNormalMapping);
My rotation moves the camera around the object. I want to move the light position along with the camera so that the shading is always visible.
How can I send the camera position to the shader and implement this?
From the front:
Rotated:
EDIT:
I updated the camera position after rotating like this:
GL.Light(LightName.Light0, LightParameter.Position, new float[] { CanvasControlSettings.Camera.Position.X, CanvasControlSettings.Camera.Position.Y, CanvasControlSettings.Camera.Position.Z, 0.0f } );
And changed the light line to this
light = gl_LightSource[0].position.xyz;
This almost works perfectly, except that the material light color is now way too bright! I would show pictures, but it seems I need more rep.
EDIT:
OK, after following the provided links, I found my way. I changed the vertex shader light code to:
vec4 lightPos4 = vec4(gl_LightSource[0].position.xyz, 1.0);
vec4 pos4 = model_matrix * vec4(vertexPosition, 1.0);
light = normalize((lightPos4 - pos4).xyz);
Works perfectly now. Thanks
You don't need to send the camera position to your shader. Just change your light's position to the camera position and send your light to your shader.
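A minimal sketch of that suggestion, reusing the ShaderProgram indexer pattern from the question (here a hypothetical light_position uniform stands in for gl_LightSource[0].position, and the vertex shader would compute light = normalize(light_position - worldPos) as in the EDIT above):
// Hypothetical sketch: each frame, place the light at the camera and re-upload it.
CanvasControlSettings.ShaderProgram.Use();
CanvasControlSettings.ShaderProgram["light_position"].SetValue(
    CanvasControlSettings.Camera.Position);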

bumpmapping OpenTK GLcontrol

I have to make shaders in GLSL; I'm using OpenTK, and I'm writing in C# Windows Forms.
I have some problems with using these shaders in my GLControl.
This is my vertex shader:
attribute vec3 tangent;
attribute vec3 binormal;
varying vec3 position;
varying vec3 lightvec;
void main()
{
    gl_Position = ftransform();
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
    position = gl_Vertex.xyz;
    mat3 TBNMatrix = mat3(tangent, binormal, gl_Normal);
    lightvec = gl_LightSource[0].position.xyz - position;
    lightvec *= TBNMatrix;
}
and this is my fragment shader:
uniform sampler2D base;
uniform sampler2D normalMap;
uniform vec3 CAMERA_POSITION;
varying vec3 position;
varying vec3 lightvec;
void main()
{
    vec3 norm = texture2D(normalMap, gl_TexCoord[0].st).rgb * 2.0 - 1.0;
    vec3 baseColor = texture2D(base, gl_TexCoord[0].st).rgb;
    float dist = length(lightvec);
    vec3 lightVector = normalize(lightvec);
    float nxDir = max(0.0, dot(norm, lightVector));
    vec4 diffuse = gl_LightSource[0].diffuse * nxDir;
    float specularPower = 0.0;
    if (nxDir != 0.0)
    {
        vec3 cameraVector = normalize(CAMERA_POSITION - position.xyz);
        vec3 halfVector = normalize(lightVector + cameraVector);
        float nxHalf = max(0.0, dot(norm, halfVector));
        specularPower = pow(nxHalf, gl_FrontMaterial.shininess);
    }
    vec4 specular = gl_LightSource[0].specular * specularPower;
    gl_FragColor = gl_LightSource[0].ambient +
                   (diffuse * vec4(baseColor.rgb, 1.0)) +
                   specular;
}
Should I use GL.Uniform()? I don't know how to use these shaders...
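A minimal OpenTK sketch of wiring these shaders up (assuming vertexSrc and fragmentSrc hold the two sources above and cameraPosition is a known Vector3; custom attributes such as tangent and binormal would additionally need locations via GL.BindAttribLocation before linking, or GL.GetAttribLocation afterwards):
// Hypothetical sketch: compile, link, and feed the CAMERA_POSITION uniform.
int vs = GL.CreateShader(ShaderType.VertexShader);
GL.ShaderSource(vs, vertexSrc);
GL.CompileShader(vs);
int fs = GL.CreateShader(ShaderType.FragmentShader);
GL.ShaderSource(fs, fragmentSrc);
GL.CompileShader(fs);
int program = GL.CreateProgram();
GL.AttachShader(program, vs);
GL.AttachShader(program, fs);
GL.LinkProgram(program);
GL.UseProgram(program);
int camLoc = GL.GetUniformLocation(program, "CAMERA_POSITION");
GL.Uniform3(camLoc, cameraPosition.X, cameraPosition.Y, cameraPosition.Z);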
