C# WinForms SharpGL: getting the modelview matrix and the OpenGLControl stops drawing - c#

I have run into something interesting.
I want to get the window coordinates of the rendered objects. When I use the following in the OpenGLDraw event:
var modelview = new double[16];
gl.GetDouble(OpenGL.GL_MODELVIEW_MATRIX, modelview);
Drawing no longer works.
Environment: VS 2019 Community Edition, SharpGL.WinForms 3.1.1, C# WinForms project, .NET Framework 4.6.1
OpenGLControl events:
Init (I only need 2D space):
private void RenderPanel_OpenGLInitialized(object sender, EventArgs e)
{
gl = RenderPanel.OpenGL;
gl.Disable(OpenGL.GL_DEPTH_TEST);
gl.Enable(OpenGL.GL_BLEND);
gl.BlendFunc(OpenGL.GL_SRC_ALPHA, OpenGL.GL_ONE_MINUS_SRC_ALPHA);
gl.LoadIdentity();
gl.MatrixMode(OpenGL.GL_PROJECTION);
gl.LoadIdentity();
gl.Viewport(0, 0, RenderPanel.Width, RenderPanel.Height);
gl.Ortho(0, RenderPanel.Width, RenderPanel.Height, 0, 1, -1);
gl.MatrixMode(OpenGL.GL_MODELVIEW);
gl.LoadIdentity();
}
Draw: this works fine and draws the objects:
private void RenderPanel_OpenGLDraw(object sender, RenderEventArgs args)
{
gl.ClearColor(1, 1, 1, 1);
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);
gl.LoadIdentity();
glColor(Color.White);
gl.Enable(OpenGL.GL_TEXTURE_2D);
foreach (ImageItem img in ImageItems.Items)
{
img.Draw();
}
}
I want to use this in the foreach loop, after img.Draw() (part of the code):
var modelview = new double[16];
var projection = new double[16];
var viewport = new int[4];
var winx = new double[1];
var winy = new double[1];
var winz = new double[1];
gl.GetDouble(OpenGL.GL_MODELVIEW_MATRIX, modelview);
gl.GetDouble(OpenGL.GL_PROJECTION_MATRIX, projection);
gl.GetInteger(OpenGL.GL_VIEWPORT, viewport);
gl.Project(0, 0, 0, modelview, projection, viewport, winx, winy, winz);
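For what it's worth, with this fixed 2D setup the same origin coordinates can also be computed on the CPU without any glGet* calls; a minimal sketch (my assumption, not part of the original code):
// Sketch only: with the init handler's gl.Ortho(0, Width, Height, 0, 1, -1) and a modelview
// that just translates by (pos.x, pos.y), projecting the object-space origin gives
double winX = pos.x;                          // x maps 1:1 to window pixels
double winY = RenderPanel.Height - pos.y;     // gl.Project uses a bottom-left origin, hence the flip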
I tried to debug by commenting it out row by row. I can see that as soon as gl.GetDouble() is used, the drawing is gone. I do get correct window coordinates, but the last object disappears.
Draw() is simple:
gl.BindTexture(OpenGL.GL_TEXTURE_2D, TextureID);
gl.LoadIdentity();
gl.Color(1f, 1f, 1f, pos.a);
gl.Translate(pos.x, pos.y, 0);
gl.Rotate(pos.r, 0, 0);
OpenGLDraw.DrawQuad(pos.w, pos.h);

It looks like a bug. I added these two rows to some sample projects and the drawing broke there as well (some objects disappear).
I tried lots of things. Finally I changed the RenderContextType and the last object appeared again. I don't think it's a good solution, but it works.
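For reference, the context type can be switched on the control itself, typically in the designer or before the control initializes; a minimal sketch, assuming the control is named RenderPanel and that FBO happens to be the value that helps (which value works may differ per machine):
// Sketch only: switch the SharpGL control to a different render context type.
// Available values are DIBSection, NativeWindow, HiddenWindow and FBO.
RenderPanel.RenderContextType = SharpGL.RenderContextType.FBO;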

Related

Vertex data is not drawn after DrawArrays call

Getting started with SharpGL after using other frameworks for OpenGL in C#, I decided to start with the simplest of examples to make sure I understood any syntax changes/niceties of SharpGL.
So I'm attempting to render a single solid-coloured triangle, which shouldn't be too difficult.
I have two Vertex Buffers, one that stores the points of the Triangle and the other that stores the colours at each of the points. These are built up like so (The points one is the same except it uses the points array):
var colorsVboArray = new uint[1];
openGl.GenBuffers(1, colorsVboArray);
openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, colorsVboArray[0]);
this.colorsPtr = GCHandle.Alloc(this.colors, GCHandleType.Pinned).AddrOfPinnedObject();
openGl.BufferData(OpenGL.GL_ARRAY_BUFFER, this.colors.Length * Marshal.SizeOf<float>(), this.colorsPtr,
OpenGL.GL_STATIC_DRAW);
These are then set with the correct attrib pointer and enabled:
openGl.VertexAttribPointer(0, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
openGl.EnableVertexAttribArray(0);
But now when I draw using the call:
openGl.DrawArrays(OpenGL.GL_TRIANGLES, 0, 3);
I don't get anything on the screen. No exceptions, but the background is simply blank.
Naturally I presumed there were compilation issues with my shaders; fortunately SharpGL gives me an easy way of checking, and both the vertex and fragment shaders show as correctly compiled and linked.
Can anyone see why this code does not correctly display any objects? It's basically the same code that I've used before.
Full Source:
internal class Triangle
{
private readonly float[] colors = new float[9];
private readonly ShaderProgram program;
private readonly float[] trianglePoints =
{
0.0f, 0.5f, 0.0f,
0.5f, -0.5f, 0.0f,
-0.5f, -0.5f, 0.0f
};
private IntPtr colorsPtr;
private IntPtr trianglePointsPtr;
private readonly VertexBufferArray vertexBufferArray;
public Triangle(OpenGL openGl, SolidColorBrush solidColorBrush)
{
for (var i = 0; i < this.colors.Length; i+=3)
{
this.colors[i] = solidColorBrush.Color.R / 255.0f;
this.colors[i + 1] = solidColorBrush.Color.G / 255.0f;
this.colors[i + 2] = solidColorBrush.Color.B / 255.0f;
}
this.vertexBufferArray = new VertexBufferArray();
this.vertexBufferArray.Create(openGl);
this.vertexBufferArray.Bind(openGl);
var colorsVboArray = new uint[1];
openGl.GenBuffers(1, colorsVboArray);
openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, colorsVboArray[0]);
this.colorsPtr = GCHandle.Alloc(this.colors, GCHandleType.Pinned).AddrOfPinnedObject();
openGl.BufferData(OpenGL.GL_ARRAY_BUFFER, this.colors.Length * Marshal.SizeOf<float>(), this.colorsPtr,
OpenGL.GL_STATIC_DRAW);
var triangleVboArray = new uint[1];
openGl.GenBuffers(1, triangleVboArray);
openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, triangleVboArray[0]);
this.trianglePointsPtr = GCHandle.Alloc(this.trianglePoints, GCHandleType.Pinned).AddrOfPinnedObject();
openGl.BufferData(OpenGL.GL_ARRAY_BUFFER, this.trianglePoints.Length * Marshal.SizeOf<float>(), this.trianglePointsPtr,
OpenGL.GL_STATIC_DRAW);
openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER,triangleVboArray[0]);
openGl.VertexAttribPointer(0, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, colorsVboArray[0]);
openGl.VertexAttribPointer(1, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
openGl.EnableVertexAttribArray(0);
openGl.EnableVertexAttribArray(1);
var vertexShader = new VertexShader();
vertexShader.CreateInContext(openGl);
vertexShader.SetSource(new StreamReader(
Assembly.GetExecutingAssembly()
.GetManifestResourceStream(@"OpenGLTest.Shaders.Background.SolidColor.SolidColorVertex.glsl"))
.ReadToEnd());
vertexShader.Compile();
var fragmentShader = new FragmentShader();
fragmentShader.CreateInContext(openGl);
fragmentShader.SetSource(new StreamReader(
Assembly.GetExecutingAssembly()
.GetManifestResourceStream(@"OpenGLTest.Shaders.Background.SolidColor.SolidColorFragment.glsl"))
.ReadToEnd());
fragmentShader.Compile();
this.program = new ShaderProgram();
this.program.CreateInContext(openGl);
this.program.AttachShader(vertexShader);
this.program.AttachShader(fragmentShader);
this.program.Link();
}
public void Draw(OpenGL openGl)
{
this.program.Push(openGl, null);
this.vertexBufferArray.Bind(openGl);
openGl.DrawArrays(OpenGL.GL_TRIANGLES, 0, 3);
this.program.Pop(openGl, null);
}
}
Vertex Shader:
#version 430 core
layout(location = 0) in vec3 vertex_position;
layout(location = 1) in vec3 vertex_color;
out vec3 color;
void main()
{
color = vertex_color;
gl_Position = vec4(vertex_position, 1.0);
}
Fragment Shader:
#version 430 core
in vec3 colour;
out vec4 frag_colour;
void main ()
{
frag_colour = vec4 (colour, 1.0);
}
Fixed this in the end fairly simply.
I had previously reviewed the SharpGL code and had noted that GL_DEPTH_TEST had been enabled, so I had presumed that the GL_DEPTH_BUFFER_BIT had been correctly cleared and I didn't have to do this.
After reviewing the Render code in SharpGL it turns out that this is not cleared by default but instead the onus is passed to the user to correctly clear the depth buffer.
Therefore I needed a simple call to clear to fix this:
this.openGl.Clear(OpenGL.GL_DEPTH_BUFFER_BIT);
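In context that just means clearing the depth buffer at the start of every frame, before anything is drawn; a minimal sketch with assumed names (the openGl and triangle fields, and whatever per-frame draw callback the host control provides):
// Sketch only (names are assumptions): clear both buffers once per frame, before any Draw calls.
private void OnDraw(OpenGL openGl)
{
    openGl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);
    this.triangle.Draw(openGl);
}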

Monogame/XNA/HLSL Shadow Mapping occurs only around player

I've recently begun playing around with MonoGame (an open-source "remake" of XNA). My current task is to write a simple shadow-mapping shader. After following Riemer's XNA tutorial I got the shadows to show up, but interestingly only around where the "player" (camera) is. When I'm outside of the light's radius there's no light showing up around the camera (which is good), but the light from my spotlight should of course still be there. It isn't. The shadows and the bright lighting from my spotlight only appear around the player.
As I said, my HLSL code is exactly the same as Riemer's (except that I wrote SV_POSITION instead of POSITION0 in the VertexShaderInput struct, because MonoGame only supports pixel/vertex shaders 4.0 and above).
I also have a feeling that my Light-View/-Projection Matrices might somehow be wrong:
Matrix lightsView = Matrix.CreateLookAt(lightPos, new Vector3(lightPos.X, lightPos.Y-1, lightPos.Z), new Vector3(0, 0, 1));
Matrix lightsProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45F), 1f, 1f, 10f);
(I know that my "up" vector is not pointing upwards; that was the only way I got the light to point downwards from its position. Otherwise it would always point to (0, -1, 0), even when I wrote it so that the light's coordinates were used and the target should be y - 1 below the light - really weird...)
I really hope someone knows what's going on here; this issue has kept me from doing anything else in my code for a couple of days now...
PS: Here's my rendering code:
////In my Model-Class
private void DrawModel(string tech)
{
Matrix[] transforms = new Matrix[model.Bones.Count];
model.CopyAbsoluteBoneTransformsTo(transforms);
foreach (var mesh in model.Meshes)
{
foreach (ModelMeshPart part in mesh.MeshParts)
{
part.Effect = Game1.effect;
Matrix tempWorld = ((transforms[mesh.ParentBone.Index] * Matrix.CreateScale(scale)) * rotationMatrix) * Matrix.CreateTranslation(pos);
part.Effect.CurrentTechnique = part.Effect.Techniques[tech];
part.Effect.Parameters["xWorldViewProjection"].SetValue(tempWorld * Game1.camera.View * Game1.camera.Projektion);
part.Effect.Parameters["xTexture"].SetValue(textures[part]);
part.Effect.Parameters["xWorld"].SetValue(tempWorld);
part.Effect.Parameters["xLightPos"].SetValue(Game1.currentLevel.lightPos);
part.Effect.Parameters["xLightPower"].SetValue(Game1.currentLevel.lightPower);
part.Effect.Parameters["xAmbient"].SetValue(Game1.currentLevel.ambientPower);
part.Effect.Parameters["xLightsWorldViewProjection"].SetValue(tempWorld * Game1.currentLevel.lightsViewProjectionMatrix);
part.Effect.Parameters["xShadowMap"].SetValue(Game1.currentLevel.shadowMap);
}
mesh.Draw();
}
}
////In my Level-Class
public void Render()
{
var device = Game1.graphics.GraphicsDevice;
device.SetRenderTarget(renderTarget);
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
DrawScene("ShadowMap");
device.SetRenderTarget(null);
shadowMap = (Texture2D)renderTarget;
device.Clear(ClearOptions.Target | ClearOptions.DepthBuffer, Color.Black, 1.0f, 0);
DrawScene("ShadowedScene");
}
private void DrawScene(string tech)
{
foreach (var model in models)
{
model.Render(tech);
}
}
and the light-update code:
private void UpdateLightData()
{
ambientPower = 0.2f;
lightPos = new Vector3(-0.2F, 1.2F, 1.1F);
lightPower = 1.8f;
Matrix lightsView = Matrix.CreateLookAt(lightPos, new Vector3(lightPos.X, lightPos.Y-1, lightPos.Z), new Vector3(0, 0, 1));
Matrix lightsProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45F), 1f, 1f, 10f);
lightsViewProjectionMatrix = lightsView * lightsProjection;
}
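On the "up" vector in CreateLookAt: it only needs to be a direction that is not parallel to the view direction, and since this light looks straight down the -Y axis, (0, 1, 0) would be degenerate; the world Z axis used above is the usual choice. A minimal sketch restating that (not from the original post):
// Sketch only: the light looks straight down (-Y), so "up" can be any vector
// perpendicular to that direction, e.g. the world Z axis.
Vector3 lightTarget = lightPos + Vector3.Down;   // one unit straight below the light
Matrix lightsView = Matrix.CreateLookAt(lightPos, lightTarget, Vector3.UnitZ);
Matrix lightsProjection = Matrix.CreatePerspectiveFieldOfView(MathHelper.ToRadians(45f), 1f, 1f, 10f);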

Rendering only part of a Visual

I have a large DrawingVisual that is added to a Canvas, and I want to get the color value of a pixel when I click the mouse. I tried rendering the entire Visual to a RenderTargetBitmap but got an "Out of memory" exception. Then, using an example I found here, How to render (bitmap) only part of a Visual?,
I tried rendering only part of the Visual, but I don't get the correct pixel value. Currently my code looks like this:
private void OnMouseDown(object sender, System.Windows.Input.MouseEventArgs e)
{
// Retrieve the coordinates of the mouse button event.
Point pt = e.GetPosition((UIElement)sender);
VisualBrush vb = new VisualBrush(myDrawingVisual);
vb.ViewboxUnits = BrushMappingMode.Absolute;
// create rect from the position in the canvas - starting point of the DrawingVisual
vb.Viewbox = new Rect(pt.X - myDrawingVisual.ContentBounds.Left, pt.Y - myDrawingVisual.ContentBounds.Top, 1, 1);
vb.ViewportUnits = BrushMappingMode.Absolute;
vb.Viewport = new Rect(pt, new Size(1, 1));
System.Windows.Shapes.Rectangle r = new System.Windows.Shapes.Rectangle();
r.Width = 1;
r.Height = 1;
r.Fill = vb;
// Use RenderTargetBitmap to get the visual, in case the image has been transformed.
var renderTargetBitmap = new RenderTargetBitmap(1,
1,
96, 96, PixelFormats.Default);
renderTargetBitmap.Render(r);
var pixels = new byte[4];
renderTargetBitmap.CopyPixels(pixels, 4, 0);
}
You haven't laid out the Rectangle. For details refer to the Layout article on MSDN.
Add the following lines:
r.Measure(new Size(1, 1));
r.Arrange(new Rect(0, 0, 1, 1));
var renderTargetBitmap = ...
renderTargetBitmap.Render(r);
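Putting the layout calls together with the original handler, here is a rough sketch of reading the pixel under the mouse (variable names follow the question; moving the Viewport to (0, 0, 1, 1) is my assumption so the brush actually covers the 1x1 rectangle):
// Sketch only: render a laid-out 1x1 rectangle filled with a VisualBrush and read back its pixel.
Point pt = e.GetPosition((UIElement)sender);
var vb = new VisualBrush(myDrawingVisual)
{
    ViewboxUnits = BrushMappingMode.Absolute,
    Viewbox = new Rect(pt.X - myDrawingVisual.ContentBounds.Left,
                       pt.Y - myDrawingVisual.ContentBounds.Top, 1, 1),
    ViewportUnits = BrushMappingMode.Absolute,
    Viewport = new Rect(0, 0, 1, 1)   // cover the 1x1 target (assumption, see above)
};
var r = new System.Windows.Shapes.Rectangle { Width = 1, Height = 1, Fill = vb };
r.Measure(new Size(1, 1));            // the layout pass the original code was missing
r.Arrange(new Rect(0, 0, 1, 1));
var rtb = new RenderTargetBitmap(1, 1, 96, 96, PixelFormats.Pbgra32);
rtb.Render(r);
var pixels = new byte[4];             // BGRA byte order for Pbgra32
rtb.CopyPixels(pixels, 4, 0);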

MonoGame: stencil buffer not working

I'm trying to add shadows to my MonoGame-based 2D game. At first I just rendered semitransparent black textures into the frame, but they sometimes overlap and it looks nasty.
I tried to render all the shadows into the stencil buffer first, and then use a single semitransparent texture to draw all shadows at once through the stencil buffer. However, it doesn't work as expected.
The two problems are:
The shadows are rendered into the scene
The stencil buffer is seemingly unaffected: the semitransparent black texture covers the entire screen
Here's the initialization code:
StencilCreator = new DepthStencilState
{
StencilEnable = true,
StencilFunction = CompareFunction.Always,
StencilPass = StencilOperation.Replace,
ReferenceStencil = 1
};
StencilRenderer = new DepthStencilState
{
StencilEnable = true,
StencilFunction = CompareFunction.Equal,
ReferenceStencil = 1,
StencilPass = StencilOperation.Keep
};
var projection = Matrix.CreateOrthographicOffCenter(0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, 0, 0, 1);
var halfPixel = Matrix.CreateTranslation(-0.5f, -0.5f, 0);
AlphaEffect = new AlphaTestEffect(GraphicsDevice)
{
DiffuseColor = Color.White.ToVector3(),
AlphaFunction = CompareFunction.Greater,
ReferenceAlpha = 0,
World = Matrix.Identity,
View = Matrix.Identity,
Projection = halfPixel * projection
};
MaskingTexture = new Texture2D(GameEngine.GraphicsDevice, 1, 1);
MaskingTexture.SetData(new[] { new Color(0f, 0f, 0f, 0.3f) });
ShadowTexture = ResourceCache.Get<Texture2D>("Skins/Common/wall-shadow");
And the following code is the body of my Draw method:
// create stencil
GraphicsDevice.Clear(ClearOptions.Stencil, Color.Black, 0f, 0);
var batch = new SpriteBatch(GraphicsDevice);
batch.Begin(SpriteSortMode.Immediate, null, null, StencilCreator, null, AlphaEffect);
foreach (Vector2 loc in ShadowLocations)
{
batch.Draw(
ShadowTexture,
loc,
null,
Color.White,
0f,
Vector2.Zero,
2f,
SpriteEffects.None,
0f
);
}
batch.End();
// render shadow texture through stencil
batch.Begin(SpriteSortMode.Immediate, null, null, StencilRenderer, null);
batch.Draw(MaskingTexture, GraphicsDevice.Viewport.Bounds, Color.White);
batch.End();
What could possibly be the problem? The same code worked fine in my XNA project.
I worked around the issue by using a RenderTarget2D instead of the stencil buffer. Drawing solid black shadows to a RT2D and then drawing the RT2D itself into the scene with a semitransparent color does the trick and is much simpler to implement.
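A rough sketch of that workaround (names, sizes and the 30% alpha are assumptions, not the original code):
// Sketch only: draw opaque shadows into an offscreen target, then composite it once, semitransparent.
var shadowTarget = new RenderTarget2D(GraphicsDevice, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height);
GraphicsDevice.SetRenderTarget(shadowTarget);
GraphicsDevice.Clear(Color.Transparent);
batch.Begin();
foreach (Vector2 loc in ShadowLocations)
    batch.Draw(ShadowTexture, loc, null, Color.Black, 0f, Vector2.Zero, 2f, SpriteEffects.None, 0f);
batch.End();
GraphicsDevice.SetRenderTarget(null);
// Overlaps were flattened into the opaque target, so a single 30% alpha draw keeps the shading even.
batch.Begin();
batch.Draw(shadowTarget, Vector2.Zero, Color.White * 0.3f);
batch.End();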

Cannot Render Image using GLK / OpenTK

We are trying to make an app using Xamarin which will have a small animated face in a GLKView on a particular screen. We have looked for solutions for rendering sprites, and the best solution we came up with stems from this solution here. We are having trouble even drawing a simple image in the GLKView, and the error in the output does not really make sense. We are converting this from iOS to Xamarin C#, so there are differences between certain calls, but we have tried to keep most pieces intact.
Here are the parts of the code this is related to:
public class Sprite : NSObject
{
public void Render()
{
Effect.Texture2d0.GLName = TextureInfo.Name;
Effect.Texture2d0.Enabled = true;
Effect.PrepareToDraw();
GL.EnableVertexAttribArray((int)GLKVertexAttrib.Position);
GL.EnableVertexAttribArray((int)GLKVertexAttrib.TexCoord0);
IntPtr ptr = Marshal.AllocHGlobal(Marshal.SizeOf(Quad));
Marshal.StructureToPtr(Quad, ptr, false);
int offset = (int)ptr;
GL.VertexAttribPointer((uint)GLKVertexAttrib.Position, 2, VertexAttribPointerType.Float, false, Marshal.SizeOf(typeof(TexturedVertex)), offset + (int)Marshal.OffsetOf(typeof(TexturedVertex), "geomertryVertex"));
GL.VertexAttribPointer((uint)GLKVertexAttrib.Position, 2, VertexAttribPointerType.Float, false, Marshal.SizeOf(typeof(TexturedVertex)), offset + (int)Marshal.OffsetOf(typeof(TexturedVertex), "textureVertex"));
GL.DrawArrays(BeginMode.TriangleStrip, 0, 4);
Marshal.FreeHGlobal(ptr);
}
}
Sprite.Render() is called in this GLKViewController here:
public class AnimationViewController : GLKViewController
{
GLKView animationView;
EAGLContext context;
Sprite player;
GLKBaseEffect effect;
public override void ViewDidLoad()
{
base.ViewDidLoad();
context = new EAGLContext(EAGLRenderingAPI.OpenGLES2);
if (context == null)
Console.WriteLine("Failed to create ES context...");
animationView = new GLKView(new RectangleF(UIScreen.MainScreen.Bounds.Width * 0.05f,
UIScreen.MainScreen.Bounds.Height * 0.05f,
UIScreen.MainScreen.Bounds.Width * 0.9f,
UIScreen.MainScreen.Bounds.Height * 0.75f), context);
EAGLContext.SetCurrentContext(context);
animationView.DrawInRect += new EventHandler<GLKViewDrawEventArgs>(animationView_DrawInRect);
View.AddSubview(animationView);
effect = new GLKBaseEffect();
Matrix4 projectionMatrix = Matrix4.CreateOrthographicOffCenter(0, animationView.Frame.Width, 0, animationView.Frame.Height, -1024, 1024);
effect.Transform.ProjectionMatrix = projectionMatrix;
player = new Sprite(#"Player.png", effect);
}
void animationView_DrawInRect(object sender, GLKViewDrawEventArgs e)
{
GL.ClearColor(0.98f, 0.98f, 0.98f, 1.0f);
//GL.Clear((uint)(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit));
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
GL.Enable(EnableCap.Blend);
player.Render();
}
}
Links to whole code files:
Sprite Class and related Structs
AnimationViewController Class
Looks like the problem is just a typo in the second call to VertexAttribPointer. The second GLKVertexAttrib.Position should instead be GLKVertexAttrib.TexCoord0:
GL.VertexAttribPointer((uint)GLKVertexAttrib.Position, 2, VertexAttribPointerType.Float, false, Marshal.SizeOf(typeof(TexturedVertex)), offset + (int)Marshal.OffsetOf(typeof(TexturedVertex), "geomertryVertex"));
GL.VertexAttribPointer((uint)GLKVertexAttrib.TexCoord0, 2, VertexAttribPointerType.Float, false, Marshal.SizeOf(typeof(TexturedVertex)), offset + (int)Marshal.OffsetOf(typeof(TexturedVertex), "textureVertex"));
