I probably have a very simple OpenGL question. I want to draw different shapes with textures on them.
So I defined a vertex struct, which contains the vertex positions and additional information (normal vector, texture coordinates, reference to a texture):
public struct Vertex
{
public Vector3 position;
public Vector3 normal;
public Vector2 texCoord;
public int texid;
public static int SizeInBytes
{
get { return Vector3.SizeInBytes * 2 + Vector2.SizeInBytes + sizeof(int); }
}
public Vertex(Vector3 position, Vector3 normal, Vector2 texCoord, int texid)
{
this.position = position;
this.normal = normal;
this.texCoord = texCoord;
this.texid = texid;
}
}
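One thing worth double-checking with a hand-computed stride like SizeInBytes is that the CLR really lays the struct out sequentially with no padding; Vector3.SizeInBytes * 2 + Vector2.SizeInBytes + sizeof(int) is 36 bytes only under that assumption. A minimal sketch that pins the layout down explicitly (using System.Runtime.InteropServices):
[StructLayout(LayoutKind.Sequential, Pack = 1)]
public struct Vertex
{
    public Vector3 position; // 12 bytes
    public Vector3 normal;   // 12 bytes
    public Vector2 texCoord; //  8 bytes
    public int texid;        //  4 bytes -> 36 bytes in total

    // With a sequential, packed layout this matches the hand-computed value.
    public static int SizeInBytes
    {
        get { return Marshal.SizeOf(typeof(Vertex)); }
    }

    // constructor as above
}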
Then I create 2 vertex arrays (VA_TopBottom_WP and VA_Side_WP) with their vertex buffers (VB_TopBottom_WP and VB_Side_WP). Moreover, I create all the textures.
internal void init()
{
// Create Buffer for top and bottom part
GL.GenVertexArrays(1, out VA_TopBottom_WP);
GL.BindVertexArray(VA_TopBottom_WP);
VB_TopBottom_WP = GL.GenBuffer();
// Create Buffer for side part
GL.GenVertexArrays(1, out VA_Side_WP);
GL.BindVertexArray(VA_Side_WP);
VB_Side_WP = GL.GenBuffer();
// Create textures
texes = new List<STL_Tools.Texture2D>();
texes.Add(new STL_Tools.Texture2D());//Default texture
for (int n = 0; n < session.pgm.Count(); n++)
{
texes.Add(new STL_Tools.Texture2D());
}
updateFrame();
}
Then I update the buffer content:
public void updateFrame()
{
// Top and bottom buffer
GL.BindVertexArray(VA_TopBottom_WP);
GL.BindBuffer(BufferTarget.ArrayBuffer, VB_TopBottom_WP);
//Fill vertex structure
vertBuffer_TopBottom = poly.GetTopBottomMeshesVertex();
GL.BufferData<Vertex>(BufferTarget.ArrayBuffer, (IntPtr)(Vertex.SizeInBytes * vertBuffer_TopBottom.Length), vertBuffer_TopBottom, BufferUsageHint.StaticDraw);
GL.EnableVertexAttribArray(0);
GL.VertexAttribPointer(0,3, VertexAttribPointerType.Float,false, Vertex.SizeInBytes, (IntPtr)0);//Position
GL.VertexAttribPointer(1, 2, VertexAttribPointerType.Float, false, Vertex.SizeInBytes, (IntPtr)(Vector3.SizeInBytes*2));//Texture
GL.VertexAttribPointer(2, 3, VertexAttribPointerType.Float, false, Vertex.SizeInBytes, (IntPtr)(Vector3.SizeInBytes));//Normal
// Side buffer
GL.BindVertexArray(VA_Side_WP);
GL.BindBuffer(BufferTarget.ArrayBuffer, VB_Side_WP);
vertBuffer_Side = poly.GetSideVertex();//Fill vertices
GL.BufferData<Vertex>(BufferTarget.ArrayBuffer, (IntPtr)(Vertex.SizeInBytes * vertBuffer_Side.Length), vertBuffer_Side, BufferUsageHint.StaticDraw);
GL.EnableVertexAttribArray(0);
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, Vertex.SizeInBytes, (IntPtr)0);//Position
GL.VertexAttribPointer(1, 2, VertexAttribPointerType.Float, false, Vertex.SizeInBytes, (IntPtr)(Vector3.SizeInBytes * 2));//Texture
GL.VertexAttribPointer(2, 3, VertexAttribPointerType.Float, false, Vertex.SizeInBytes, (IntPtr)(Vector3.SizeInBytes));//Normal
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
}
Afterwards, the vertex arrays contain reasonable texture coordinates (the Vector2 objects in the following). E.g.:
vertBuffer_Side = new Vertex[4]
{new Vertex(new Vector3(0,0,0),new Vector3(1,0,0),new Vector2(0,0),0),
new Vertex(new Vector3(0,-80,0),new Vector3(1,0,0),new Vector2(1,0),0),
new Vertex(new Vector3(0,-80,-10),new Vector3(1,0,0),new Vector2(1,1),0),
new Vertex(new Vector3(0,0,-10),new Vector3(1,0,0),new Vector2(0,0),0)};
Then I draw everything with the following function:
public void renderFrame()
{
GL.PushMatrix();
GL.EnableClientState(ArrayCap.VertexArray);
GL.EnableClientState(ArrayCap.NormalArray);
GL.EnableClientState(ArrayCap.TextureCoordArray);
// Draw Side
GL.BindVertexArray(VA_Side_WP);
GL.Color3(Color.White);
GL.Enable(EnableCap.Texture2D);
for (int n = 0; n < (vertBuffer_Side.Length / 4); n++)
{
//GL.BindTexture(TextureTarget.Texture2D, 0);
GL.BindTexture(TextureTarget.Texture2D, texes[vertBuffer_Side[n*4].texid].ID);
GL.DrawArrays(PrimitiveType.Quads, n*4,4);
}
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
GL.Disable(EnableCap.Texture2D);
//Draw top and bottom
GL.BindVertexArray(VA_TopBottom_WP);
...
GL.PopMatrix();
}
The geometry is drawn correctly, but the texture is not: everything is drawn in a single color, the color of the far-right pixel of the texture.
Probably, the following call does not work correctly:
GL.VertexAttribPointer(1, 2, VertexAttribPointerType.Float, false, Vertex.SizeInBytes, (IntPtr)(Vector3.SizeInBytes * 2));//Texture
Where is my mistake?
The client-side capabilities are state of the Vertex Array Object. Therefore you need to bind the VAO before setting the client-side capabilities. Also, you are not using a shader program. Therefore, you must define the VertexPointer, NormalPointer and TexCoordPointer instead of specifying generic arrays of vertex attributes. Note that it is possible to specify generic vertex attribute 0 instead of the VertexPointer, but this is not possible for the normals and texture coordinates (also see What are the Attribute locations for fixed function pipeline in OpenGL 4.0++ core profile?):
GL.VertexPointer(3, VertexPointerType.Float, Vertex.SizeInBytes, (IntPtr)0);//Position
GL.TexCoordPointer(2, TexCoordPointerType.Float, Vertex.SizeInBytes, (IntPtr)(Vector3.SizeInBytes*2));//Texture
GL.NormalPointer(NormalPointerType.Float, Vertex.SizeInBytes, (IntPtr)(Vector3.SizeInBytes));//Normal
GL.BindVertexArray(VA_Side_WP);
GL.EnableClientState(ArrayCap.VertexArray);
GL.EnableClientState(ArrayCap.NormalArray);
GL.EnableClientState(ArrayCap.TextureCoordArray);
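The pointer calls above belong where the generic attribute pointers were set, i.e. in updateFrame while the corresponding VAO and VBO are bound; a minimal sketch for the side buffer, assuming the vertex layout from the question:
GL.BindVertexArray(VA_Side_WP);
GL.BindBuffer(BufferTarget.ArrayBuffer, VB_Side_WP);
GL.BufferData<Vertex>(BufferTarget.ArrayBuffer, (IntPtr)(Vertex.SizeInBytes * vertBuffer_Side.Length), vertBuffer_Side, BufferUsageHint.StaticDraw);
GL.VertexPointer(3, VertexPointerType.Float, Vertex.SizeInBytes, (IntPtr)0); //Position
GL.TexCoordPointer(2, TexCoordPointerType.Float, Vertex.SizeInBytes, (IntPtr)(Vector3.SizeInBytes * 2)); //Texture
GL.NormalPointer(NormalPointerType.Float, Vertex.SizeInBytes, (IntPtr)Vector3.SizeInBytes); //Normal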
I'm trying to put together information from this tutorial (Advanced-OpenGL/Instancing) and these answers (How to render using 2 VBO) (New API Clarification) in order to render instances of a square, passing the model matrix for each instance to the shader through an ArrayBuffer. The code I ended up with is below. I sliced and tested every part, and the problem seems to be that the model matrix itself is not passed correctly to the shader. I'm using OpenTK in Visual Studio.
For simplicity and debugging, the pool contains just a single square, so I don't yet have divisor problems or other complications I haven't dealt with.
My vertex data arrays contain 3 floats for position and 4 floats for color (stride = 7 times the float size).
My results with the attached code are:
if I remove the imodel multiplication in the vertex shader, I get exactly what I expect, a red square (rendered as 2 triangles) with a green border (rendered as a line loop).
if I change the shader and multiply by the model matrix, I get a red line above the center of the screen which changes its length over time. The animation makes sense, because the simulation is rotating the square, so the angle updates regularly and thus the calculated model matrix changes. Another great result, because I'm actually sending dynamic data to the shader. However, I can't get my original square rotated and translated.
Any clue?
Thanks a lot.
Vertex Shader:
#version 430 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec4 aCol;
layout (location = 2) in mat4 imodel;
out vec4 fColor;
uniform mat4 view;
uniform mat4 projection;
void main() {
fColor = aCol;
gl_Position = vec4(aPos, 1.0) * imodel * view * projection;
}
Fragment Shader:
#version 430 core
in vec4 fColor;
out vec4 FragColor;
void main() {
FragColor = fColor;
}
OnLoad snippet (initialization):
InstanceVBO = GL.GenBuffer();
GL.GenBuffers(2, VBO);
GL.BindBuffer(BufferTarget.ArrayBuffer, VBO[0]);
GL.BufferData(BufferTarget.ArrayBuffer,
7 * LineLoopVertCount * sizeof(float),
LineLoopVertData, BufferUsageHint.StaticDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, VBO[1]);
GL.BufferData(BufferTarget.ArrayBuffer,
7 * TrianglesVertCount * sizeof(float),
TrianglesVertData, BufferUsageHint.StaticDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
// VAO SETUP
VAO = GL.GenVertexArray();
GL.BindVertexArray(VAO);
// Position
GL.EnableVertexAttribArray(0);
GL.VertexAttribFormat(0, 3, VertexAttribType.Float, false, 0);
GL.VertexArrayAttribBinding(VAO, 0, 0);
// Color
GL.EnableVertexAttribArray(1);
GL.VertexAttribFormat(1, 4, VertexAttribType.Float, false, 3 * sizeof(float));
GL.VertexArrayAttribBinding(VAO, 1, 0);
int vec4Size = 4;
GL.EnableVertexAttribArray(2);
GL.VertexAttribFormat(2, 4, VertexAttribType.Float, false, 0 * vec4Size * sizeof(float));
GL.VertexAttribFormat(3, 4, VertexAttribType.Float, false, 1 * vec4Size * sizeof(float));
GL.VertexAttribFormat(4, 4, VertexAttribType.Float, false, 2 * vec4Size * sizeof(float));
GL.VertexAttribFormat(5, 4, VertexAttribType.Float, false, 3 * vec4Size * sizeof(float));
GL.VertexAttribDivisor(2, 1);
GL.VertexAttribDivisor(3, 1);
GL.VertexAttribDivisor(4, 1);
GL.VertexAttribDivisor(5, 1);
GL.VertexArrayAttribBinding(VAO, 2, 1);
GL.BindVertexArray(0);
OnFrameRender snippet:
shader.Use();
shader.SetMatrix4("view", cameraViewMatrix);
shader.SetMatrix4("projection", cameraProjectionMatrix);
int mat4Size = 16;
for (int i = 0; i < simulation.poolCount; i++)
{
modelMatrix[i] = Matrix4.CreateFromAxisAngle(
this.RotationAxis, simulation.pool[i].Angle);
modelMatrix[i] = modelMatrix[i] * Matrix4.CreateTranslation(new Vector3(
simulation.pool[i].Position.X,
simulation.pool[i].Position.Y,
0f));
//modelMatrix[i] = Matrix4.Identity;
}
// Copy model matrices into the VBO
// ----------------------------------------
GL.BindBuffer(BufferTarget.ArrayBuffer, InstanceVBO);
GL.BufferData(BufferTarget.ArrayBuffer,
simulation.poolCount * mat4Size * sizeof(float),
modelMatrix, BufferUsageHint.DynamicDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
// ----------------------------------------
GL.BindVertexArray(VAO);
GL.BindVertexBuffer(1, InstanceVBO, IntPtr.Zero, mat4Size * sizeof(float));
GL.BindVertexBuffer(0, VBO[0], IntPtr.Zero, 7 * sizeof(float));
GL.DrawArraysInstanced(PrimitiveType.LineLoop, 0, LineLoopVertCount, simulation.poolCount);
GL.BindVertexBuffer(0, VBO[1], IntPtr.Zero, 7 * sizeof(float));
GL.DrawArraysInstanced(PrimitiveType.Triangles, 0, TrianglesVertCount, simulation.poolCount);
GL.BindVertexArray(0);
There is a lot wrong here.
First, you don't enable any of the attribute arrays after 2, even though your shader says that you're reading 3-5 too. Similarly, you don't set the attribute binding for any of the arrays after 2.
But your bigger problem is that you use glVertexAttribDivisor. That's the wrong function for what you're trying to do. That's the old API for setting the divisor.
In separate attribute format, the divisor is part of the buffer binding, not the vertex attribute. So the divisor needs to be set with glVertexBindingDivisor, and the index it is given is the index you intend to bind the buffer to. Which should be 1.
So presumably, your code should look like:
int vec4Size = 4;
for(int ix = 0; ix < 4; ++ix)
{
int attribIx = 2 + ix;
GL.EnableVertexAttribArray(attribIx);
GL.VertexAttribFormat(attribIx, 4, VertexAttribType.Float, false, ix * vec4Size * sizeof(float));
GL.VertexArrayAttribBinding(VAO, attribIx, 1); //All use the same buffer binding
}
GL.VertexBindingDivisor(1, 1);
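Since the setup already uses the DSA-style GL.VertexArrayAttribBinding, the divisor can presumably also be set directly on the VAO through the GL 4.5 entry point, assuming the OpenTK binding exposes it under the same name:
GL.VertexArrayBindingDivisor(VAO, 1, 1); // divisor 1 on buffer binding index 1 of this VAO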
In OpenTK, the matrices are laid out by rows, while the shader reads the mat4 attribute by columns, so it is necessary to swap rows and columns (transpose). You can do it like this:
private Matrix4[] TransposeMatrix(Matrix4[] inputModel)
{
var outputModel = new Matrix4[inputModel.Length];
for(int i = 0; i < inputModel.Length; i++)
{
outputModel[i].Row0 = inputModel[i].Column0;
outputModel[i].Row1 = inputModel[i].Column1;
outputModel[i].Row2 = inputModel[i].Column2;
outputModel[i].Row3 = inputModel[i].Column3;
}
return outputModel;
}
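OpenTK's Matrix4 also ships a static transpose, so the same result can be had without the helper; a usage sketch applied just before the instance data is buffered in OnFrameRender:
for (int i = 0; i < modelMatrix.Length; i++)
    modelMatrix[i] = Matrix4.Transpose(modelMatrix[i]);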
I've been trying to look into different examples of texturing in OpenTK, but few code examples use the approach I want, and a lot of them require pointless workarounds that do not fit my needs. I am simply trying to draw images in OpenTK without their UVs being distorted or malformed. Or rather: how do I deform them to fit the primitive (in this case a quad/square) wherever it is positioned in the 2D world?
Consider this image (It's my texture I'm trying to fit inside a quad primitive):
This is the unwanted result. As you can see, it is cropped. I don't care about the wrapping because I plan on fitting the whole image inside the square (No aspect ratio needed). Different wrapping settings did nothing. The image's center is still outside the square.
The transparency and palette are mine to worry about; I only need help fitting the whole image inside the square!
This is my code for loading textures:
public Texture(Bitmap image)
{
ID = GL.GenTexture();
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, ID);
BitmapData data = image.LockBits(new Rectangle(0, 0, image.Width, image.Height), ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, image.Width, image.Height, 0, OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Linear);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)TextureWrapMode.Repeat);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)TextureWrapMode.Repeat);
GL.GenerateMipmap(GenerateMipmapTarget.Texture2D);
image.UnlockBits(data); // release the bitmap lock once the pixel data has been uploaded
}
Then here's my code for loading the vertex data into the shaders and drawing the primitive:
List<float> vertex_data = new List<float> {
-.25f, -.25f,
-.25f, .25f,
.25f, .25f,
.25f, -.25f
};
// Load the 2D object
GL.UseProgram(program);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
GL.BufferData(BufferTarget.ArrayBuffer, vertex_data.Count * sizeof(float), vertex_data.ToArray(), BufferUsageHint.DynamicDraw);
GL.EnableVertexAttribArray(attr_pos);
GL.VertexAttribPointer(attr_pos, 2, VertexAttribPointerType.Float, false, 2 * sizeof(float), 0);
GL.EnableVertexAttribArray(attr_uv);
GL.VertexAttribPointer(attr_uv, 2, VertexAttribPointerType.Float, false, 2 * sizeof(float), 0);
// ^^ Using the same position data for the UV as well.
// ...
// Drawing the 2D object
GL.UseProgram(program);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, ID);
GL.DrawArrays(PrimitiveType.Quads, 0, 4);
And the vertex and fragment shaders:
vert:
#version 330 core
layout(location = 1) in vec3 vertex_pos;
layout(location = 2) in vec2 vertex_uv;
out vec2 uv;
uniform mat4 mvp;
void main() {
gl_Position = mvp * vec4(vertex_pos, 1);
uv = vertex_uv;
}
frag:
#version 330 core
in vec2 uv;
out vec4 color;
uniform sampler2D texture0;
void main() {
color = texture(texture0, uv);
}
All you need to do is specify texture coordinates in the range [0.0, 1.0]:
List<float> vertex_data = new List<float> {
// x y u v
-.25f, -.25f, 0.0f, 0.0f,
-.25f, .25f, 0.0f, 1.0f,
.25f, .25f, 1.0f, 1.0f,
.25f, -.25f, 1.0f, 0.0f,
};
// Load the 2D object
GL.UseProgram(program);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
GL.BufferData(BufferTarget.ArrayBuffer, vertex_data.Count * sizeof(float),
vertex_data.ToArray(), BufferUsageHint.DynamicDraw);
GL.EnableVertexAttribArray(attr_pos);
GL.VertexAttribPointer(attr_pos, 2, VertexAttribPointerType.Float, false,
4 * sizeof(float), 0);
GL.EnableVertexAttribArray(attr_uv);
GL.VertexAttribPointer(attr_uv, 2, VertexAttribPointerType.Float, false,
4 * sizeof(float), 2 * sizeof(float));
The texture coordinate (0, 0) addresses the bottom-left corner of the texture and the texture coordinate (1, 1) addresses the top-right corner.
You must associate the texture coordinate (0, 0) with the bottom left of the quad and the texture coordinate (1, 1) with the top right of the quad.
The 4th argument of GL.VertexAttribPointer (stride) specifies the byte offset between consecutive generic vertex attributes. Since each vertex consists of 4 elements of type float (x, y, u, v), the stride is 4*sizeof(float). The last argument is the byte offset of the attribute within a vertex: the offset of the vertex coordinates is 0 and the offset of the texture coordinates is 2*sizeof(float).
Of course you can use 2 separate attribute arrays instead. In that case, the stride is 2*sizeof(float) for both the vertex coordinates and the texture coordinates. The offset of the vertex coordinates is 0 and the offset of the texture coordinates is the size of all the vertex coordinates (8*sizeof(float)):
Use GL.BufferSubData to initialize buffer data:
List<float> vertex_data = new List<float> {
// x y
-.25f, -.25f,
-.25f, .25f,
.25f, .25f,
.25f, -.25f,
};
List<float> texture_data = new List<float> {
// u v
0.0f, 0.0f,
0.0f, 1.0f,
1.0f, 1.0f,
1.0f, 0.0f,
};
// Load the 2D object
GL.UseProgram(program);
GL.BindBuffer(BufferTarget.ArrayBuffer, vbo);
GL.BufferData(BufferTarget.ArrayBuffer,
vertex_data.Count * sizeof(float) + texture_data.Count * sizeof(float),
IntPtr.Zero, BufferUsageHint.DynamicDraw);
GL.BufferSubData(BufferTarget.ArrayBuffer, (IntPtr)0,
vertex_data.Count * sizeof(float), vertex_data.ToArray());
GL.BufferSubData(BufferTarget.ArrayBuffer, (IntPtr)(vertex_data.Count * sizeof(float)),
texture_data.Count * sizeof(float), texture_data.ToArray());
GL.EnableVertexAttribArray(attr_pos);
GL.VertexAttribPointer(attr_pos, 2, VertexAttribPointerType.Float, false,
2 * sizeof(float), 0);
GL.EnableVertexAttribArray(attr_uv);
GL.VertexAttribPointer(attr_uv, 2, VertexAttribPointerType.Float, false,
2 * sizeof(float), vertex_data.Count * sizeof(float));
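With either layout the draw call itself is unchanged; a minimal usage sketch, assuming the texture object from the question:
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, ID);
GL.DrawArrays(PrimitiveType.Quads, 0, 4);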
I have 10 sets of data saved in arrays, for example A1 = [1, 2, 3], A2 = [5, 6, 7], etc. I am using an OpenTK GameWindow to plot them, as I have a big data set (10 million+ points per set). For now I know how to plot 1 data set as follows:
public GraphWindow(List<float>[] Dataset) : base(800, 600, default, "Data Analyzer", GameWindowFlags.Default, default, 4, 0, default)
{
test = EditVertices(Dataset[0]);
}
private float[] EditVertices(List<float> list)
{
float[] list2d = new float[2*list.Count()];
for(int i =0; i<list.Count(); i++)
{
list2d[2*i]=-1+ i*(((float)2/(float)list.Count()));
list2d[2*i +1] =((float)(list[i]-list.Min())/(float)(list.Max()-list.Min()));
}
return list2d;
}
protected override void OnLoad(EventArgs e)
{
CursorVisible = true;
shader = new Shader("shader.vert", "shader.frag");
shader.Use();
VertexArrayObject = GL.GenVertexArray();
VertexBufferObject = GL.GenBuffer();
GL.BindVertexArray(VertexArrayObject);
GL.BindBuffer(BufferTarget.ArrayBuffer, VertexBufferObject);
GL.BufferData(BufferTarget.ArrayBuffer, test.Count()*sizeof(float), test, BufferUsageHint.StaticDraw);
var vertexLocation = shader.GetAttribLocation("aPosition");
GL.EnableVertexAttribArray(vertexLocation);
GL.VertexAttribPointer( vertexLocation, 2, VertexAttribPointerType.Float, false, 2*sizeof(float), 0);
GL.EnableVertexAttribArray(0);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
GL.BindVertexArray(0);
base.OnLoad(e);
}
protected override void OnRenderFrame(FrameEventArgs e)
{
GL.ClearColor(Color4.DarkBlue);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
GL.BindVertexArray(VertexArrayObject);
GL.DrawArrays(PrimitiveType.LineStrip, 0, test.Count());
GL.PointSize(10);
SwapBuffers();
base.OnRenderFrame(e);
}
shader.vert
#version 400 core
in vec2 aPosition;
void main()
{
gl_Position = vec4(aPosition, 0.0f, 1.0f);
}
I roughly know how I will draw the 10 sets: I will just add an offset to every data set, combine them into one array (the test array), and then pass it to shader.vert. The thing is, I don't know what to do with the x coordinates. I don't want to keep adding them to the test array, as is done in the EditVertices function, since that would significantly increase its size, especially as I am dealing with a huge amount of data. Is there a way to pass the x coordinates separately, but at the same time relate them to all data sets? As in, they all share the same x coordinates but have different Y values. I hope my question is clear.
Is there a way to pass the x coordinates separately, but at the same time relate them to all data sets?
Use 2 attributes of type float, one for the x coordinate and another one for the y coordinate:
#version 400 core
in float aPositionX;
in float aPositionY;
void main()
{
gl_Position = vec4(aPositionX, aPositionY, 0.0, 1.0);
}
var vertexLocationX = shader.GetAttribLocation("aPositionX");
var vertexLocationY = shader.GetAttribLocation("aPositionY");
Now you can use the same X coordinate for all data sets. Create 1 array of vertex attribute data for the X coordinates:
float[] vertexX = new float[list.Count()];
for(int i = 0; i < list.Count(); i++)
vertexX[i] = -1f + i * 2f / list.Count();
The array for the Y coordinates has to be created separately for each data set:
float[] vertexY = new float[list.Count()];
for(int i = 0; i < list.Count(); i++)
vertexY[i] = (list[i] - list.Min()) / (list.Max() - list.Min());
You have to create 2 separate Vertex Buffer Objects:
VertexBufferObjectX = GL.GenBuffer();
GL.BindBuffer(BufferTarget.ArrayBuffer, VertexBufferObjectX);
GL.BufferData(BufferTarget.ArrayBuffer, vertexX.Count()*sizeof(float), vertexX, BufferUsageHint.StaticDraw);
VertexBufferObjectY = GL.GenBuffer();
GL.BindBuffer(BufferTarget.ArrayBuffer, VertexBufferObjectY);
GL.BufferData(BufferTarget.ArrayBuffer, vertexY.Count()*sizeof(float), vertexY, BufferUsageHint.StaticDraw);
And you have to adapt the specification of the Vertex Array Object:
VertexArrayObject = GL.GenVertexArray();
GL.BindVertexArray(VertexArrayObject);
GL.BindBuffer(BufferTarget.ArrayBuffer, VertexBufferObjectX);
GL.VertexAttribPointer(vertexLocationX, 1, VertexAttribPointerType.Float, false, 0, 0);
GL.BindBuffer(BufferTarget.ArrayBuffer, VertexBufferObjectY);
GL.VertexAttribPointer(vertexLocationY, 1, VertexAttribPointerType.Float, false, 0, 0);
GL.EnableVertexAttribArray(vertexLocationX);
GL.EnableVertexAttribArray(vertexLocationY);
GL.BindVertexArray(0);
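To cover all 10 data sets, one option is a separate Y buffer and VAO per data set, each VAO reusing the single X buffer; a minimal render sketch, where VertexArrayObjects and pointCount are hypothetical names for illustration:
for (int set = 0; set < 10; set++)
{
    GL.BindVertexArray(VertexArrayObjects[set]); // pairs VertexBufferObjectX with this set's Y VBO
    GL.DrawArrays(PrimitiveType.LineStrip, 0, pointCount);
}
GL.BindVertexArray(0);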
I'm making 2D games in OpenTK (a C# wrapper for OpenGL 4), and all was well except for jagged edges of polygons and things jumping and stuttering instead of moving smoothly - so I'm trying to add in multisampling to antialias my textures.
My setup has several Cameras which render all their scene objects onto a FrameBufferObject texture (I would like this to be MSAA), which are then all drawn to the screen (no multisampling needed), one on top of the other.
Without multisampling it worked fine, but when I tried to change all my Texture2D calls to Texture2DMultisample etc., I got FBO Not Complete errors and it draws incorrectly. I believe I need to change my shaders too, but I want to solve this first.
The code below references a few classes like Texture that I've made, but I don't think that should impact this, and I don't want to clutter the post - I will give more details if needed.
I set up the FBO for each camera with:
private void SetUpFBOTex()
{
_frameBufferTexture = new Texture(GL.GenTexture(), Window.W, Window.H);
GL.BindTexture(TextureTarget.Texture2DMultisample, _frameBufferTexture.ID);
GL.TexImage2DMultisample(TextureTargetMultisample.Texture2DMultisample, 0, PixelInternalFormat.Rgba8, Window.W, Window.H, true);
_frameBufferID = GL.GenFramebuffer();
}
and draw with:
public void Render(Matrix4 matrix)
{
GL.Enable(EnableCap.Multisample);
//Bind FBO to be the draw destination and clear it
GL.BindFramebuffer(FramebufferTarget.Framebuffer, _frameBufferID);
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment0, TextureTarget.Texture2DMultisample, _frameBufferTexture.ID, 0);
GL.ClearColor(new Color4(0,0,0,0));
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
//draw stuff here
foreach (Layer l in Layers)
l.Render(Matrix);
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
//Bind the FBO to be drawn
_frameBufferTexture.Bind();
//Translate to camera window position
Matrix4 fbomatrix = matrix * Matrix4.CreateTranslation(_window.x, _window.y, 0) * FBOMatrix;
//Bind shader
shader.Bind(ref fbomatrix, DrawType);
//Some OpenGL setup nonsense, binding vertices and index buffer and telling OpenGL where in the vertex struct things are, pointers &c
GL.BindBuffer(BufferTarget.ArrayBuffer, vertexBuffer);
GL.EnableVertexAttribArray(shader.LocationPosition);
GL.VertexAttribPointer(shader.LocationPosition, 2, VertexAttribPointerType.Float, false, Stride, 0);
if (shader.LocationTexture != -1)
{
GL.EnableVertexAttribArray(shader.LocationTexture);
GL.VertexAttribPointer(shader.LocationTexture, 2, VertexAttribPointerType.Float, false, Stride, 8);
}
GL.EnableVertexAttribArray(shader.LocationColour);
GL.VertexAttribPointer(shader.LocationColour, 4, VertexAttribPointerType.UnsignedByte, true, Stride, 16);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, indexBuffer);
//Draw the damn quad
GL.DrawArrays(DrawType, 0, Vertices.Length);
//Cleanup
GL.DisableVertexAttribArray(shader.LocationPosition);
if (shader.LocationTexture != -1)
GL.DisableVertexAttribArray(shader.LocationTexture);
GL.DisableVertexAttribArray(shader.LocationColour);
}
Ok, @Andon gets credit for this - if you write it as an answer I'll mark it as the solution. I was indeed doing antialiasing with 0 samples!
I'm posting the working antialiased drawing to multiple FBOS code for future OpenTK googlers.
private void SetUpFBOTex()
{
_frameBufferTexture = new Texture(GL.GenTexture(), Window.W, Window.H);
GL.BindTexture(TextureTarget.Texture2DMultisample, _frameBufferTexture.ID);
GL.TexImage2DMultisample(TextureTargetMultisample.Texture2DMultisample, 8, PixelInternalFormat.Rgba8, Window.W, Window.H, false);
_frameBufferID = GL.GenFramebuffer();
}
public void Render(Matrix4 matrix)
{
//Bind FBO to be the draw destination and clear it
GL.BindFramebuffer(FramebufferTarget.Framebuffer, _frameBufferID);
GL.FramebufferTexture2D(FramebufferTarget.Framebuffer, FramebufferAttachment.ColorAttachment0, TextureTarget.Texture2DMultisample, _frameBufferTexture.ID, 0);
GL.ClearColor(new Color4(0,0,0,0));
GL.Clear(ClearBufferMask.ColorBufferBit);
//draw stuff here
foreach (Layer l in Layers)
l.Render(Matrix);
//unbind FBO to allow drawing to screen again
GL.BindFramebuffer(FramebufferTarget.Framebuffer, 0);
//Bind the FBO to be drawn
GL.BindTexture(TextureTarget.Texture2DMultisample, _frameBufferTexture.ID);
//Translate to camera window position
Matrix4 fbomatrix = matrix * Matrix4.CreateTranslation(_window.x, _window.y, 0) * FBOMatrix;
//Rotate camera FBO texture
if (_rotationAngle != 0f)
{
fbomatrix = Matrix4.CreateTranslation(RotationCentre.x, RotationCentre.y, 0) * fbomatrix;
fbomatrix = Matrix4.CreateRotationZ(_rotationAngle) * fbomatrix;
fbomatrix = Matrix4.CreateTranslation(-RotationCentre.x, -RotationCentre.y, 0) * fbomatrix;
}
shader.Bind(ref fbomatrix, DrawType);
//Some OpenGL setup nonsense, binding vertices and index buffer and telling OpenGL where in the vertex struct things are, pointers &c
GL.BindBuffer(BufferTarget.ArrayBuffer, vertexBuffer);
GL.EnableVertexAttribArray(shader.LocationPosition);
GL.VertexAttribPointer(shader.LocationPosition, 2, VertexAttribPointerType.Float, false, Stride, 0);
if (shader.LocationTexture != -1)
{
GL.EnableVertexAttribArray(shader.LocationTexture);
GL.VertexAttribPointer(shader.LocationTexture, 2, VertexAttribPointerType.Float, false, Stride, 8);
}
GL.EnableVertexAttribArray(shader.LocationColour);
GL.VertexAttribPointer(shader.LocationColour, 4, VertexAttribPointerType.UnsignedByte, true, Stride, 16);
GL.BindBuffer(BufferTarget.ElementArrayBuffer, indexBuffer);
//Draw the damn quad
GL.DrawArrays(DrawType, 0, Vertices.Length);
//Cleanup
GL.DisableVertexAttribArray(shader.LocationPosition);
if (shader.LocationTexture != -1)
GL.DisableVertexAttribArray(shader.LocationTexture);
GL.DisableVertexAttribArray(shader.LocationColour);
}
I have a wrapper class to control Shader code, here's the bind call:
internal void Bind(ref Matrix4 matrixMVP)
{
//Set this shader as active shader
GL.UseProgram(programID);
//Load position matrix into vertex shaders
GL.UniformMatrix4(LocationMVPMatrix, false, ref matrixMVP);
//Load active texture into fragment shaders
GL.Uniform1(LocationSampler, 0);
}
Fragment shader:
/// <summary>
/// Test for a Multisampled fragment shader - http://www.opentk.com/node/2251
/// </summary>
public const string fragmentShaderTestSrc =
#"
#version 330
uniform sampler2DMS Sampler;
in vec2 InTexture;
in vec4 OutColour;
out vec4 OutFragColor;
int samples = 16;
float div= 1.0/samples;
void main()
{
OutFragColor = vec4(0.0);
ivec2 texcoord = ivec2(textureSize(Sampler) * InTexture); // used to fetch msaa texel location
for (int i=0;i<samples;i++)
{
OutFragColor += texelFetch(Sampler, texcoord, i) * OutColour; // add color samples together
}
OutFragColor *= div; // divide by the number of samples to get the color average
}
";
Vertex shader:
/// <summary>
/// Default vertex shader that only applies specified matrix transformation
/// </summary>
public const string vertexShaderDefaultSrc =
#"
#version 330
uniform mat4 MVPMatrix;
layout (location = 0) in vec2 Position;
layout (location = 1) in vec2 Texture;
layout (location = 2) in vec4 Colour;
out vec2 InVTexture;
out vec4 vFragColorVs;
void main()
{
gl_Position = MVPMatrix * vec4(Position, 0, 1);
InVTexture = Texture;
vFragColorVs = Colour;
}";
I want to draw a texture in my shader but get an exception (see below).
I have following code:
int vertexArray;
//Pointer to Buffers
int vertexBuffer;
int colorBuffer;
int coordBuffer;
int texUniform; //Pointer to Uniform
int texture; //Pointer to Texture
Init
GL.Enable(EnableCap.Texture2D);
texture = LoadPNG("Resources\\Test.png");
//...
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (float)All.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (float)All.Nearest);
GL.Hint(HintTarget.PerspectiveCorrectionHint, HintMode.Nicest);
vertexArray = GL.GenVertexArray();
GL.BindVertexArray(vertexArray);
float[] TexCoords = new float[] {
0.0f, 0.0f,
1.0f, 0.0f,
0.0f, 1.0f,
}; //(Array.Length = 2*3)
//Arrays for Vertex (3*3) and Color (4*3)
//GenBuffer, BindBuffer and BufferData for Color and Vertex
coordBuffer = GL.GenBuffer();
GL.BindBuffer(BufferTarget.TextureBuffer, coordBuffer);
GL.BufferData(BufferTarget.TextureBuffer, (IntPtr)(sizeof(float) * TexCoords.Length), TexCoords, BufferUsageHint.StaticDraw);
//Load shader
texUniform = GL.GetUniformLocation(shaderProgram, "tex");
GL.Uniform1(texUniform, 0);
GL.ActiveTexture(TextureUnit.Texture0);
Draw
GL.UseProgram(shaderProgram);
GL.BindTexture(TextureTarget.Texture2D, texture);
GL.EnableVertexAttribArray(0);
GL.BindBuffer(BufferTarget.ArrayBuffer, vertexBuffer);
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 0, 0);
GL.EnableClientState(ArrayCap.VertexArray);
GL.EnableVertexAttribArray(1);
GL.BindBuffer(BufferTarget.ArrayBuffer, colorBuffer);
GL.VertexAttribPointer(1, 4, VertexAttribPointerType.Float, false, 0, 0);
GL.EnableClientState(ArrayCap.ColorArray);
GL.BindBuffer(BufferTarget.TextureBuffer, coordBuffer);
GL.TexCoordPointer(2, TexCoordPointerType.Float, Vector2.SizeInBytes, 0);
GL.EnableClientState(ArrayCap.TextureCoordArray);
GL.DrawArrays(PrimitiveType.Triangles, 0, 6); //<------ Exception
GL.DisableVertexAttribArray(0);
GL.DisableVertexAttribArray(1);
I get a System.AccessViolationException at GL.DrawArrays(...). I suspect that I haven't loaded a buffer correctly or have used a pointer in an incorrect way. The exception is caused by changes I made to get a texture with texture coordinates into the shader; the vertex and color buffers were working before.
I'm not sure what I am doing wrong. I have tried different things with the shader, but it seems it doesn't matter what I do with the shader...
At my last try:
Vertex Shader
#version 330 core
layout(location = 0) in vec3 position;
layout(location = 1) in vec4 color;
layout(location = 2) in vec2 texCoord;
out vec4 vColor;
out vec2 texCoords[];
void main(){
gl_Position = vec4(position, 1.0);
texCoords[0] = texCoord;
vColor = color;
}
Fragment Shader
#version 330 core
in vec4 vColor;
in vec2 texCoords[];
uniform sampler2D tex;
out vec4 fColor;
void main(void)
{
//fColor = vColor;
fColor = texture2D(Texture0, texCoords[0].st);
}
GetShaderInfoLog and GetProgramInfoLog do not return any errors when I comment out GL.DrawArrays(...) and run the application.
What is wrong with my code?
Do not enable client state vertex arrays.
Replace the following:
GL.EnableClientState(ArrayCap.VertexArray);
...
GL.EnableClientState(ArrayCap.ColorArray);
...
GL.EnableClientState(ArrayCap.TextureCoordArray);
With:
GL.EnableVertexAttribArray(0);
...
GL.EnableVertexAttribArray(1);
...
GL.EnableVertexAttribArray(2);
At present, you are telling GL to source vertex attributes from glVertexPointer (...), glColorPointer (...) and glTexCoordPointer (...), none of which you actually have setup.
You might be able to get away with enabling the client state: ArrayCap.VertexArray because many drivers alias that to attribute 0, but the others are a recipe for disaster. Nevertheless, until you remove the EnableClientState (...) calls you are going to continue crashing.
Update:
I missed something in your texture coordinate setup...
You also need to replace this line:
GL.TexCoordPointer(2, TexCoordPointerType.Float, Vector2.SizeInBytes, 0);
With this:
GL.VertexAttribPointer(2, 2, VertexAttribPointerType.Float, false, 0, 0);
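Two further observations about the posted draw code, beyond what the answer covers: GL.VertexAttribPointer sources from the buffer currently bound to BufferTarget.ArrayBuffer (the coordinate buffer is bound to BufferTarget.TextureBuffer, so attribute 2 would not read from it), and the buffers hold only 3 vertices while 6 are drawn. A minimal sketch of the draw path under those assumptions:
GL.UseProgram(shaderProgram);
GL.BindTexture(TextureTarget.Texture2D, texture);

GL.BindBuffer(BufferTarget.ArrayBuffer, vertexBuffer);
GL.EnableVertexAttribArray(0);
GL.VertexAttribPointer(0, 3, VertexAttribPointerType.Float, false, 0, 0);

GL.BindBuffer(BufferTarget.ArrayBuffer, colorBuffer);
GL.EnableVertexAttribArray(1);
GL.VertexAttribPointer(1, 4, VertexAttribPointerType.Float, false, 0, 0);

GL.BindBuffer(BufferTarget.ArrayBuffer, coordBuffer); // ArrayBuffer, not TextureBuffer
GL.EnableVertexAttribArray(2);
GL.VertexAttribPointer(2, 2, VertexAttribPointerType.Float, false, 0, 0);

GL.DrawArrays(PrimitiveType.Triangles, 0, 3); // the buffers contain 3 vertices

GL.DisableVertexAttribArray(0);
GL.DisableVertexAttribArray(1);
GL.DisableVertexAttribArray(2);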