Vertex data is not drawn after DrawArrays call - C#

Getting started with SharpGL after using other OpenGL frameworks in C#, I decided to begin with the simplest of examples to make sure I understood any syntax changes/niceties of SharpGL.
So I'm attempting to render a single solid-coloured triangle, which shouldn't be too difficult.
I have two vertex buffers, one that stores the points of the triangle and the other that stores the colours at each of the points. These are built up like so (the points one is the same, except it uses the points array):
var colorsVboArray = new uint[1];
openGl.GenBuffers(1, colorsVboArray);
openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, colorsVboArray[0]);
this.colorsPtr = GCHandle.Alloc(this.colors, GCHandleType.Pinned).AddrOfPinnedObject();
openGl.BufferData(OpenGL.GL_ARRAY_BUFFER, this.colors.Length * Marshal.SizeOf<float>(), this.colorsPtr,
    OpenGL.GL_STATIC_DRAW);
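As an editorial aside, unrelated to the blank screen: GCHandle.Alloc(..., GCHandleType.Pinned) returns a handle that should eventually be freed, and the snippet above discards it, leaving the array pinned for the life of the process. A sketch of the tidier pattern follows; it assumes the pin is only needed until BufferData has copied the data, which is how glBufferData behaves:

// Keep the handle itself so the pinned array can be released again;
// BufferData copies the data, so the pin is only needed for this call.
var colorsHandle = GCHandle.Alloc(this.colors, GCHandleType.Pinned);
try
{
    openGl.BufferData(OpenGL.GL_ARRAY_BUFFER,
        this.colors.Length * Marshal.SizeOf<float>(),
        colorsHandle.AddrOfPinnedObject(),
        OpenGL.GL_STATIC_DRAW);
}
finally
{
    colorsHandle.Free();
}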
These are then set with the correct attrib pointer and enabled:
openGl.VertexAttribPointer(0, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
openGl.EnableVertexAttribArray(0);
But now when I draw using the call:
openGl.DrawArrays(OpenGL.GL_TRIANGLES, 0, 3);
I don't get anything on the screen. No exceptions, but the background is simply blank.
Naturally I presumed there were compilation issues with my shaders; fortunately SharpGL gives me an easy way of checking, and both the vertex and fragment shaders show as correctly compiled and linked.
Can anyone see why this code does not correctly display any objects? It's basically the same code that I've used before.
Full Source:
internal class Triangle
{
    private readonly float[] colors = new float[9];
    private readonly ShaderProgram program;

    private readonly float[] trianglePoints =
    {
        0.0f, 0.5f, 0.0f,
        0.5f, -0.5f, 0.0f,
        -0.5f, -0.5f, 0.0f
    };

    private IntPtr colorsPtr;
    private IntPtr trianglePointsPtr;
    private readonly VertexBufferArray vertexBufferArray;

    public Triangle(OpenGL openGl, SolidColorBrush solidColorBrush)
    {
        // Repeat the brush colour for each of the three vertices.
        for (var i = 0; i < this.colors.Length; i += 3)
        {
            this.colors[i] = solidColorBrush.Color.R / 255.0f;
            this.colors[i + 1] = solidColorBrush.Color.G / 255.0f;
            this.colors[i + 2] = solidColorBrush.Color.B / 255.0f;
        }

        this.vertexBufferArray = new VertexBufferArray();
        this.vertexBufferArray.Create(openGl);
        this.vertexBufferArray.Bind(openGl);

        // Colour VBO
        var colorsVboArray = new uint[1];
        openGl.GenBuffers(1, colorsVboArray);
        openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, colorsVboArray[0]);
        this.colorsPtr = GCHandle.Alloc(this.colors, GCHandleType.Pinned).AddrOfPinnedObject();
        openGl.BufferData(OpenGL.GL_ARRAY_BUFFER, this.colors.Length * Marshal.SizeOf<float>(), this.colorsPtr,
            OpenGL.GL_STATIC_DRAW);

        // Position VBO
        var triangleVboArray = new uint[1];
        openGl.GenBuffers(1, triangleVboArray);
        openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, triangleVboArray[0]);
        this.trianglePointsPtr = GCHandle.Alloc(this.trianglePoints, GCHandleType.Pinned).AddrOfPinnedObject();
        openGl.BufferData(OpenGL.GL_ARRAY_BUFFER, this.trianglePoints.Length * Marshal.SizeOf<float>(), this.trianglePointsPtr,
            OpenGL.GL_STATIC_DRAW);

        // Attribute 0 = position, attribute 1 = colour (matching the shader's layout locations).
        openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, triangleVboArray[0]);
        openGl.VertexAttribPointer(0, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
        openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, colorsVboArray[0]);
        openGl.VertexAttribPointer(1, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
        openGl.EnableVertexAttribArray(0);
        openGl.EnableVertexAttribArray(1);

        var vertexShader = new VertexShader();
        vertexShader.CreateInContext(openGl);
        vertexShader.SetSource(new StreamReader(
                Assembly.GetExecutingAssembly()
                    .GetManifestResourceStream(@"OpenGLTest.Shaders.Background.SolidColor.SolidColorVertex.glsl"))
            .ReadToEnd());
        vertexShader.Compile();

        var fragmentShader = new FragmentShader();
        fragmentShader.CreateInContext(openGl);
        fragmentShader.SetSource(new StreamReader(
                Assembly.GetExecutingAssembly()
                    .GetManifestResourceStream(@"OpenGLTest.Shaders.Background.SolidColor.SolidColorFragment.glsl"))
            .ReadToEnd());
        fragmentShader.Compile();

        this.program = new ShaderProgram();
        this.program.CreateInContext(openGl);
        this.program.AttachShader(vertexShader);
        this.program.AttachShader(fragmentShader);
        this.program.Link();
    }

    public void Draw(OpenGL openGl)
    {
        this.program.Push(openGl, null);
        this.vertexBufferArray.Bind(openGl);
        openGl.DrawArrays(OpenGL.GL_TRIANGLES, 0, 3);
        this.program.Pop(openGl, null);
    }
}
Vertex Shader:
#version 430 core

layout(location = 0) in vec3 vertex_position;
layout(location = 1) in vec3 vertex_color;

out vec3 color;

void main()
{
    color = vertex_color;
    gl_Position = vec4(vertex_position, 1.0);
}
Fragment Shader:
#version 430 core

in vec3 colour;

out vec4 frag_colour;

void main()
{
    frag_colour = vec4(colour, 1.0);
}

Fixed this in the end fairly simply.
I had previously reviewed the SharpGL code and noted that GL_DEPTH_TEST was enabled, so I presumed that GL_DEPTH_BUFFER_BIT was being cleared correctly and that I didn't have to do it myself.
After reviewing the Render code in SharpGL, it turns out that this is not cleared by default; instead the onus is on the user to clear the depth buffer themselves.
Therefore a simple clear call fixed it:
this.openGl.Clear(OpenGL.GL_DEPTH_BUFFER_BIT);
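For context, a minimal sketch of a per-frame draw handler with that fix applied, clearing colour and depth together before drawing (the handler name and the triangle field are illustrative, not from the original code):

private void OnOpenGLDraw(OpenGL openGl)
{
    // Clear both buffers each frame so stale depth values never mask new fragments.
    openGl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);
    this.triangle.Draw(openGl);
}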

Related

Render to texture fails after resize

In a graphics application I am rendering an image to a texture, then I use that texture on a 3D model.
My problem is the following:
When the application starts everything is fine, but if I resize the view where I do the rendering and make it bigger, the texture on the 3D model disappears (it doesn't turn black; I think all values become 1). Making the view smaller doesn't make the texture disappear, but it is shown incorrectly (not resized).
Here are some explanatory images:
Resize smaller
Not resized
Resize bigger; 1 pixel bigger is enough to make the image disappear.
The code that generates the render view is this:
private void CreateRenderToTexture(Panel view)
{
    Texture2DDescription t2d = new Texture2DDescription()
    {
        Height = view.Height,
        Width = view.Width,
        Format = Format.R32G32B32A32_Float,
        BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget, //| BindFlags.UnorderedAccess,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None,
        SampleDescription = new SampleDescription(_multisample, 0),
        MipLevels = 1,
        Usage = ResourceUsage.Default,
        ArraySize = 1,
    };
    _svgTexture = new Texture2D(_device, t2d);
    _svgRenderView = new RenderTargetView(_device, _svgTexture);
}

private void RenderSVGToTexture()
{
    _camera.SetDefaultProjection();
    UpdatePerFrameBuffers();
    _dc.OutputMerger.SetTargets(_depthStencil, _svgRenderView); // depth stencil has same dimension as all other buffers
    _dc.ClearRenderTargetView(_svgRenderView, new Color4(1.0f, 1.0f, 1.0f));
    _dc.ClearDepthStencilView(_depthStencil, DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);

    Entity e;
    if (RenderingManager.Scene.Entity2DExists("svgimage"))
    {
        RenderingManager.Scene.GetEntity2D("svgimage", out e);
        e.Draw(_dc);
    }
    _swapChain.Present(0, PresentFlags.None);
}
When rendering the 3D scene I call this function before rendering the model:
private void SetTexture()
{
    Entity e;
    if (!RenderingManager.Scene.GetEntity3D("model3d", out e))
        return;

    e.ShaderType = ResourceManager.ShaderType.MAIN_MODEL;
    if (ResourceManager.SVGTexture == null)
    {
        e.ShaderType = ResourceManager.ShaderType.PNUVNOTEX;
        return;
    }

    SamplerDescription a = new SamplerDescription();
    a.AddressU = TextureAddressMode.Wrap;
    a.AddressV = TextureAddressMode.Wrap;
    a.AddressW = TextureAddressMode.Wrap;
    a.Filter = Filter.MinPointMagMipLinear;
    SamplerState b = SamplerState.FromDescription(ResourceManager.Device, a);

    ShaderResourceView svgTexResourceView = new ShaderResourceView(ResourceManager.Device, Texture2D.FromPointer(ResourceManager.SVGTexture.ComPointer));
    ResourceManager.Device.ImmediateContext.PixelShader.SetShaderResource(svgTexResourceView, 0);
    ResourceManager.Device.ImmediateContext.PixelShader.SetSampler(b, 0);
    b.Dispose();
    svgTexResourceView.Dispose();
}
Pixel shader:
Texture2D svg : register(t0);
Texture2D errorEstimate : register(t1);
SamplerState ss : register(s0);

float4 main(float4 position : SV_POSITION, float4 color : COLOR, float2 uv : UV) : SV_Target
{
    return color * svg.Sample(ss, uv); // *errorEstimate.Sample(ss, uv);
}
I don't understand what I am doing wrong; I hope you can help me see the mistake I am making. Thank you, and sorry for the bad English!
As it (almost) always turns out, I was making a very stupid mistake.
I wasn't calling the correct resize function.
Basically, in the Renderer2D class there is a DoResize function that resizes the 2D-only buffers, while the abstract Renderer class handles the resizing of the rest of the buffers. The mistake is that in the parent class I was calling the wrong base resize function!
Parent class:
protected override void DoResize(uint width, uint height)
{
    if (width == 0 || height == 0)
        return;

    base.DoResize(width, height); // Here I was calling base.Resize (which was deprecated after a change in the application architecture)

    _camera.Width = width;
    _camera.Height = height;
    _svgTexture.Dispose();
    _svgRenderView.Dispose();
    CreateRenderToTexture(_viewReference);
    ResizePending = false;
}
Base class:
protected virtual void DoResize(uint width, uint height)
{
    Width = width;
    Height = height;
    _viewport = new Viewport(0, 0, Width, Height);

    _renderTarget.Dispose();
    if (_swapChain.ResizeBuffers(2, (int)width, (int)height, Format.Unknown, SwapChainFlags.AllowModeSwitch).IsFailure)
        Console.WriteLine("An error occurred while resizing buffers.");

    using (var resource = Resource.FromSwapChain<Texture2D>(_swapChain, 0))
        _renderTarget = new RenderTargetView(_device, resource);

    _depthStencil.Dispose();
    CreateDepthBuffer();
}
Maybe the code I posted can be of help for someone who is trying to do some render-to-texture work, since I see there are always people who can't make it work :)

DirectX 10 - Full object flashes, then draws only points

I am using C# and the SharpDX library to do some rendering for a project I am working on. However, I am only able to get the object to draw fully on the first pass; on each subsequent pass it will only draw points. It doesn't matter whether I use FillMode.Solid or FillMode.Wireframe. I've also disabled culling. If I rotate the camera around the object, I still only see points. I have images displaying the issue and key points in my files. Having looked at this code for the past few days, I am completely out of ideas, and perhaps some fresh eyes will be able to figure it out.
Additionally, it only appears to draw the first triangle, not the second one.
Here are the pictures:
Here is my code:
Mesh init
rCom.mesh = new Components.Mesh3D(this.render.Device,
    new Components.VertexStructures.Color[] {
        new Components.VertexStructures.Color(
            new SharpDX.Vector3(-1.0f, -1.0f, 0.0f), new SharpDX.Vector4(1.0f, 0.0f, 0.0f, 1.0f)),
        new Components.VertexStructures.Color(
            new SharpDX.Vector3(-1.0f, 1.0f, 0.0f), new SharpDX.Vector4(0.0f, 1.0f, 0.0f, 1.0f)),
        new Components.VertexStructures.Color(
            new SharpDX.Vector3(1.0f, 1.0f, 0.0f), new SharpDX.Vector4(0.0f, 0.0f, 1.0f, 1.0f)),
        new Components.VertexStructures.Color(
            new SharpDX.Vector3(1.0f, -1.0f, 0.0f), new SharpDX.Vector4(1.0f, 1.0f, 1.0f, 1.0f))
    },
    new[] {
        0, 1, 2, 0, 2, 3
    }
);
Vertex structure - color
public static class VertexStructures
{
    ...
    public struct Color
    {
        public SharpDX.Vector3 pos;
        public SharpDX.Vector4 col;

        public static int sizeInBytes
        { get { return Vector3.SizeInBytes + Vector4.SizeInBytes; } }

        public Color(SharpDX.Vector3 pos, SharpDX.Vector4 col)
        { this.pos = pos; this.col = col; }
    }
    ...
}
Mesh Class
public class Mesh3D
{
    private D3D10.Device device;
    public D3D10.VertexBufferBinding vertexBuffer;
    public D3D10.Buffer indexBuffer;
    public int numberOfVertices;
    public int numberOfIndices;

    public static D3D10.Buffer CreateBuffer<T>(D3D10.Device device, BindFlags bindFlags, params T[] items)
        where T : struct
    {
        var len = Utilities.SizeOf(items);
        var stream = new DataStream(len, true, true);
        foreach (var item in items)
            stream.Write(item);
        stream.Position = 0;
        var buffer = new D3D10.Buffer(device, stream, len, ResourceUsage.Default,
            bindFlags, CpuAccessFlags.None, ResourceOptionFlags.None);
        return buffer;
    }
    ...
    public Mesh3D(D3D10.Device device, VertexStructures.Color[] vertices, int[] indices = null)
    {
        this.numberOfVertices = vertices.Length;
        this.numberOfIndices = indices.Length;
        this.vertexBuffer = new VertexBufferBinding(
            CreateBuffer<VertexStructures.Color>(device, BindFlags.VertexBuffer, vertices),
            VertexStructures.Color.sizeInBytes, 0);
        if (indices != null)
            this.indexBuffer = CreateBuffer<int>(device, BindFlags.IndexBuffer, indices);
    }
    ...
}
Render Update Code
public override void Update(double timeDelta = 0.0f)
{
    // Clear our backbuffer with the rainbow color
    d3d10Device.ClearRenderTargetView(this.renderTargetView, (Color4)SharpDX.Color.CornflowerBlue);
    float time = (float)(timeDelta / 1000.0f); // time in milliseconds?

    // Do actual drawing here
    foreach (RenderComponent com in this._components)
    {
        // Get any required components
        PositionComponent pos = com.entity.GetComponent<PositionComponent>();

        // Set up required buffers
        var inputAssembler = this.d3d10Device.InputAssembler;
        inputAssembler.SetVertexBuffers(0, com.mesh.vertexBuffer);
        if (com.mesh.indexBuffer != null)
            inputAssembler.SetIndexBuffer(com.mesh.indexBuffer, Format.R32_UInt, 0);

        // Set up effect variables
        // These matrices should always be defined in the shader, even if they're not used
        com.shader.shader.GetVariableByIndex(0).AsMatrix().SetMatrix(this.camera.viewMatrix);
        com.shader.shader.GetVariableByIndex(1).AsMatrix().SetMatrix(this.camera.projectionMatrix);
        com.shader.shader.GetVariableByIndex(2).AsMatrix().SetMatrix(pos.rotationXMatrix);
        com.shader.shader.GetVariableByIndex(3).AsMatrix().SetMatrix(pos.rotationYMatrix);
        com.shader.shader.GetVariableByIndex(4).AsMatrix().SetMatrix(pos.rotationZMatrix);
        com.shader.shader.GetVariableByIndex(5).AsMatrix().SetMatrix(pos.scalingMatrix);
        com.shader.shader.GetVariableByIndex(6).AsMatrix().SetMatrix(pos.translationLocalMatrix);
        com.shader.shader.GetVariableByIndex(7).AsMatrix().SetMatrix(pos.translationWorldMatrix);

        foreach (var shaderVars in com.shader.vars)
        {
            // Eventually, we'll use this to set all the required variables needed
        }

        // Run through each technique, pass, draw
        int i = 0, j = 0;
        foreach (var techniqueContainer in com.shader.inputLayouts)
        {
            var technique = com.shader.shader.GetTechniqueByIndex(i);
            foreach (var passContainer in techniqueContainer)
            {
                var pass = technique.GetPassByIndex(j);
                inputAssembler.InputLayout = passContainer;
                pass.Apply();
                this.d3d10Device.Draw(com.mesh.numberOfVertices, 0);
                j += 1;
            }
            i += 1;
        }
    }

    // Present our drawn scene waiting for one vertical sync
    this.swapChain.Present(1, PresentFlags.None);
}
Lastly, my shader file:
matrix View;
matrix Projection;
matrix rotationXMatrix;
matrix rotationYMatrix;
matrix rotationZMatrix;
matrix scalingMatrix;
matrix translationLocalMatrix;
matrix translationWorldMatrix;

struct VS_IN
{
    float4 pos : POSITION;
    float4 col : COLOR;
};

struct PS_IN
{
    float4 pos : SV_POSITION;
    float4 col : COLOR;
};

PS_IN VS( VS_IN input )
{
    PS_IN output = (PS_IN)0;
    input.pos = mul(input.pos, rotationXMatrix);
    input.pos = mul(input.pos, rotationYMatrix);
    input.pos = mul(input.pos, rotationZMatrix);
    input.pos = mul(input.pos, scalingMatrix);
    input.pos = mul(input.pos, translationLocalMatrix);
    input.pos = mul(input.pos, translationWorldMatrix);
    input.pos = mul(input.pos, View);
    input.pos = mul(input.pos, Projection);
    output.pos = input.pos;
    output.col = float4(1.0f, 1.0f, 1.0f, 1.0f); //input.col;
    return output;
}

float4 PS( PS_IN input ) : SV_Target
{
    return input.col;
}

technique10 Render
{
    pass P0
    {
        SetGeometryShader( 0 );
        SetVertexShader( CompileShader( vs_4_0, VS() ) );
        SetPixelShader( CompileShader( ps_4_0, PS() ) );
    }
}
Please let me know if any further information or code is needed. I really hope someone can help me with this, I can't figure it out.
In the code above you don't appear to be setting the topology (triangles vs. lines vs. points). See for instance the MSDN documentation on primitive topologies, as well as the SharpDX documentation on InputAssemblerStage.PrimitiveTopology and the SharpDX documentation on the PrimitiveTopology enum.
In your Update() method you probably want to add
inputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
You should probably put this before your foreach loop for performance reasons, since it doesn't change.
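For illustration, a sketch of where that line would sit in the Update() method shown above (moving the inputAssembler variable out of the loop; the rest of the code is unchanged):

// Set the topology once, before iterating the components; it never changes here.
var inputAssembler = this.d3d10Device.InputAssembler;
inputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;

foreach (RenderComponent com in this._components)
{
    // ... existing buffer, effect and draw code ...
}

A side note on the second triangle that never appears: the mesh is built with six indices, but the draw call is Draw(com.mesh.numberOfVertices, 0), which ignores the index buffer entirely and only consumes the four raw vertices. Drawing via DrawIndexed(com.mesh.numberOfIndices, 0, 0) would be needed for the index buffer, and hence both triangles, to take effect.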
You didn't organize your points into triangles in the Update() stage.

MonoGame: stencil buffer not working

I'm trying to add shadows to my MonoGame-based 2D game. At first I just rendered semitransparent black textures to the frame, but they sometimes overlap and it looks nasty:
I tried to render all shadows into the stencil buffer first, and then use a single semitransparent texture to draw all shadows at once using the stencil buffer. However, it doesn't work as expected:
The two problems are:
The shadows are rendered into the scene
The stencil buffer is seemingly unaffected: the semitransparent black texture covers the entire screen
Here's the initialization code:
StencilCreator = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.Always,
    StencilPass = StencilOperation.Replace,
    ReferenceStencil = 1
};

StencilRenderer = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.Equal,
    ReferenceStencil = 1,
    StencilPass = StencilOperation.Keep
};

var projection = Matrix.CreateOrthographicOffCenter(0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, 0, 0, 1);
var halfPixel = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
AlphaEffect = new AlphaTestEffect(GraphicsDevice)
{
    DiffuseColor = Color.White.ToVector3(),
    AlphaFunction = CompareFunction.Greater,
    ReferenceAlpha = 0,
    World = Matrix.Identity,
    View = Matrix.Identity,
    Projection = halfPixel * projection
};

MaskingTexture = new Texture2D(GameEngine.GraphicsDevice, 1, 1);
MaskingTexture.SetData(new[] { new Color(0f, 0f, 0f, 0.3f) });
ShadowTexture = ResourceCache.Get<Texture2D>("Skins/Common/wall-shadow");
ShadowTexture = ResourceCache.Get<Texture2D>("Skins/Common/wall-shadow");
And the following code is the body of my Draw method:
// create stencil
GraphicsDevice.Clear(ClearOptions.Stencil, Color.Black, 0f, 0);
var batch = new SpriteBatch(GraphicsDevice);
batch.Begin(SpriteSortMode.Immediate, null, null, StencilCreator, null, AlphaEffect);
foreach (Vector2 loc in ShadowLocations)
{
    batch.Draw(
        ShadowTexture,
        loc,
        null,
        Color.White,
        0f,
        Vector2.Zero,
        2f,
        SpriteEffects.None,
        0f
    );
}
batch.End();

// render shadow texture through stencil
batch.Begin(SpriteSortMode.Immediate, null, null, StencilRenderer, null);
batch.Draw(MaskingTexture, GraphicsDevice.Viewport.Bounds, Color.White);
batch.End();
What could possibly be the problem? The same code worked fine in my XNA project.
I worked around the issue by using a RenderTarget2D instead of the stencil buffer. Drawing solid black shadows to a RenderTarget2D and then drawing the render target itself into the scene with a semitransparent color does the trick, and is much simpler to implement.
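A minimal sketch of that workaround, assuming a screen-sized shadowTarget created once at load time (the names here are illustrative, not from the original code):

// 1) Draw all shadow shapes opaquely into an offscreen target.
GraphicsDevice.SetRenderTarget(shadowTarget);
GraphicsDevice.Clear(Color.Transparent);
batch.Begin();
foreach (Vector2 loc in ShadowLocations)
    batch.Draw(ShadowTexture, loc, null, Color.Black, 0f, Vector2.Zero, 2f, SpriteEffects.None, 0f);
batch.End();
GraphicsDevice.SetRenderTarget(null);

// 2) Composite the whole target once with a semitransparent tint,
//    so overlapping shadows no longer darken each other.
batch.Begin();
batch.Draw(shadowTarget, GraphicsDevice.Viewport.Bounds, Color.White * 0.3f);
batch.End();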

Cannot Render Image using GLK / OpenTK

We are trying to make an app using Xamarin which will have a small animated face in a GLKView on a particular screen. We have looked for solutions for rendering sprites, and the best solution we came up with stems from this solution here. We are having trouble even drawing a simple image in the GLKView, and the error in the output does not really make sense. We are converting this from iOS to Xamarin C#, so there are differences between certain calls, but we have tried to keep most pieces intact.
Here are the parts of the code this is related to:
public class Sprite : NSObject
{
    public void Render()
    {
        Effect.Texture2d0.GLName = TextureInfo.Name;
        Effect.Texture2d0.Enabled = true;
        Effect.PrepareToDraw();

        GL.EnableVertexAttribArray((int)GLKVertexAttrib.Position);
        GL.EnableVertexAttribArray((int)GLKVertexAttrib.TexCoord0);

        IntPtr ptr = Marshal.AllocHGlobal(Marshal.SizeOf(Quad));
        Marshal.StructureToPtr(Quad, ptr, false);
        int offset = (int)ptr;

        GL.VertexAttribPointer((uint)GLKVertexAttrib.Position, 2, VertexAttribPointerType.Float, false, Marshal.SizeOf(typeof(TexturedVertex)), offset + (int)Marshal.OffsetOf(typeof(TexturedVertex), "geomertryVertex"));
        GL.VertexAttribPointer((uint)GLKVertexAttrib.Position, 2, VertexAttribPointerType.Float, false, Marshal.SizeOf(typeof(TexturedVertex)), offset + (int)Marshal.OffsetOf(typeof(TexturedVertex), "textureVertex"));

        GL.DrawArrays(BeginMode.TriangleStrip, 0, 4);
        Marshal.FreeHGlobal(ptr);
    }
}
Sprite.Render() is called in this GLKViewController here:
public class AnimationViewController : GLKViewController
{
    GLKView animationView;
    EAGLContext context;
    Sprite player;
    GLKBaseEffect effect;

    public override void ViewDidLoad()
    {
        base.ViewDidLoad();

        context = new EAGLContext(EAGLRenderingAPI.OpenGLES2);
        if (context == null)
            Console.WriteLine("Failed to create ES context...");

        animationView = new GLKView(new RectangleF(UIScreen.MainScreen.Bounds.Width * 0.05f,
            UIScreen.MainScreen.Bounds.Height * 0.05f,
            UIScreen.MainScreen.Bounds.Width * 0.9f,
            UIScreen.MainScreen.Bounds.Height * 0.75f), context);
        EAGLContext.SetCurrentContext(context);
        animationView.DrawInRect += new EventHandler<GLKViewDrawEventArgs>(animationView_DrawInRect);
        View.AddSubview(animationView);

        effect = new GLKBaseEffect();
        Matrix4 projectionMatrix = Matrix4.CreateOrthographicOffCenter(0, animationView.Frame.Width, 0, animationView.Frame.Height, -1024, 1024);
        effect.Transform.ProjectionMatrix = projectionMatrix;

        player = new Sprite(@"Player.png", effect);
    }

    void animationView_DrawInRect(object sender, GLKViewDrawEventArgs e)
    {
        GL.ClearColor(0.98f, 0.98f, 0.98f, 1.0f);
        //GL.Clear((uint)(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit));
        GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
        GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
        GL.Enable(EnableCap.Blend);
        player.Render();
    }
}
Links to whole code files:
Sprite Class and related Structs
AnimationViewController Class
Looks like the problem is just a typo in the second call to VertexAttribPointer. The second GLKVertexAttrib.Position should instead be GLKVertexAttrib.TexCoord0:
GL.VertexAttribPointer((uint)GLKVertexAttrib.Position, 2, VertexAttribPointerType.Float, false, Marshal.SizeOf(typeof(TexturedVertex)), offset + (int)Marshal.OffsetOf(typeof(TexturedVertex), "geomertryVertex"));
GL.VertexAttribPointer((uint)GLKVertexAttrib.TexCoord0, 2, VertexAttribPointerType.Float, false, Marshal.SizeOf(typeof(TexturedVertex)), offset + (int)Marshal.OffsetOf(typeof(TexturedVertex), "textureVertex"));
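Since both of the original calls wrote to the Position attribute slot, the second call simply overwrote the first: the position attribute ended up pointing at the texture-coordinate data, and TexCoord0, although enabled, was never given a pointer at all. That would explain the garbage geometry, and may also explain the confusing error in the output.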

OpenGL ES 2.0 / MonoTouch: Texture is colorized red

I'm currently loading a cube-map into my application but it's shown in a red tone.
Edit: The channel problem is also present when using 2D textures; it seems the channels are not in the correct order. Is there any way to change the order of the channels using the iOS methods?
That's the code for texture loading:
public TextureCube (Generic3DView device, UIImage right, UIImage left, UIImage top, UIImage bottom, UIImage front, UIImage back)
    : base(device)
{
    _Device = device;
    GL.GenTextures (1, ref _Handle);
    GL.BindTexture (TextureType, _Handle);

    LoadTexture(All.TextureCubeMapPositiveX, right);
    LoadTexture(All.TextureCubeMapNegativeX, left);
    LoadTexture(All.TextureCubeMapPositiveY, top);
    LoadTexture(All.TextureCubeMapNegativeY, bottom);
    LoadTexture(All.TextureCubeMapPositiveZ, front);
    LoadTexture(All.TextureCubeMapNegativeZ, back);

    GL.TexParameter(All.TextureCubeMap, All.TextureMinFilter, (Int32)All.LinearMipmapLinear);
    GL.TexParameter(All.TextureCubeMap, All.TextureMagFilter, (Int32)All.Linear);
    GL.GenerateMipmap(All.TextureCubeMap);
}

private void LoadTexture(All usage, UIImage image)
{
    GL.TexImage2D(usage, 0, (Int32)All.Rgba, (Int32)image.Size.Width,
        (Int32)image.Size.Height, 0, All.Rgba, All.UnsignedByte, RequestImagePixelData(image));
}

protected CGBitmapContext CreateARGBBitmapContext (CGImage inImage)
{
    var pixelsWide = inImage.Width;
    var pixelsHigh = inImage.Height;
    var bitmapBytesPerRow = pixelsWide * 4;
    var bitmapByteCount = bitmapBytesPerRow * pixelsHigh;

    //Note implicit colorSpace.Dispose()
    using (var colorSpace = CGColorSpace.CreateDeviceRGB()) {
        //Allocate the bitmap and create context
        var bitmapData = Marshal.AllocHGlobal (bitmapByteCount);
        if (bitmapData == IntPtr.Zero) {
            throw new Exception ("Memory not allocated.");
        }

        var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8,
            bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
        if (context == null) {
            throw new Exception ("Context not created");
        }
        return context;
    }
}

//Store pixel data as an ARGB Bitmap
protected IntPtr RequestImagePixelData (UIImage inImage)
{
    var imageSize = inImage.Size;
    CGBitmapContext ctxt = CreateARGBBitmapContext (inImage.CGImage);
    var rect = new RectangleF (0.0f, 0.0f, imageSize.Width, imageSize.Height);
    ctxt.DrawImage (rect, inImage.CGImage);
    var data = ctxt.Data;
    return data;
}
I think the channels are inverted, but maybe there is a way to invert the bitmap without some custom code.
This is the image which is rendered( ignore the fancy model in front of it ):
And the expected image:
Edit:
The GL_INVALID_OPERATION issue has been fixed, but it does not solve the issue with the red texture.
The vertex-shader:
attribute vec3 position;
uniform mat4 modelViewMatrix;
varying mediump vec3 texture;

void main()
{
    texture = position.xyz;
    gl_Position = modelViewMatrix * vec4(position.xyz, 1.0);
}
The fragment-shader:
varying mediump vec3 texture;
uniform samplerCube cubeMap;

void main()
{
    mediump vec3 cube = vec3(textureCube(cubeMap, texture));
    gl_FragColor = vec4(cube.xyz, 1.0);
}
The problem is in your CreateARGBBitmapContext function, at this line:
var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8, bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedFirst);
If you change
CGImageAlphaInfo.PremultipliedFirst
to
CGImageAlphaInfo.PremultipliedLast
that should fix your code.
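That is, the corrected line would read as follows. (PremultipliedLast produces RGBA byte ordering, which matches the All.Rgba format passed to GL.TexImage2D, whereas PremultipliedFirst puts the alpha channel first.)

var context = new CGBitmapContext (bitmapData, pixelsWide, pixelsHigh, 8,
    bitmapBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedLast);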
After some testing I decided to use the code from "XnaTouch" to load textures, which solves the problem with the red texture.
Of course this was not the end of it, because there was no alpha channel when loading PNG images. Since this was not acceptable and consumed too much time, I decided to write a DDS loader (based on code from http://humus.name/).
Did you use the program (with glUseProgram) before calling glUniform? If not, it won't work and would generate that error.
You can also check the possible causes of that GL error in the glUniform man page (at the end).
I see that you are using RGBA for both format arguments in your TexImage2D step. Judging by how blue your original image is and how red your resulting image is, I suggest swapping one of them to BGRA.
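For example, a sketch of the loading function with that swap. (This assumes the OpenTK binding exposes All.Bgra; on OpenGL ES 2.0 the BGRA source format comes from the GL_APPLE_texture_format_BGRA8888 extension, which iOS devices support, and the internal format must remain Rgba.)

private void LoadTexture(All usage, UIImage image)
{
    // Internal format stays Rgba; the source data is described as Bgra,
    // so the channels are reordered during upload.
    GL.TexImage2D(usage, 0, (Int32)All.Rgba, (Int32)image.Size.Width,
        (Int32)image.Size.Height, 0, All.Bgra, All.UnsignedByte, RequestImagePixelData(image));
}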
