OpenTK - transparency issue on VBO - c#

I just want to create an .obj file loader that loads 3D objects. Everything went fine, but when I try to load a transparent object I get a problem.
Here is a picture of the issue. Transparency is working, but I don't know why there are triangles showing through. I tried loading different objects (with and without textures), but I always get this issue.
Here are my light settings:
class Light
{
public static void SetLight()
{
GL.Enable(EnableCap.Lighting);
GL.Enable(EnableCap.Light0);
GL.Enable(EnableCap.ColorMaterial);
Vector4 position = new Vector4(0.0f, 200.0f, 300.0f, 1.0f);
Vector4 ambient = new Vector4(0.2f, 0.2f, 0.2f, 1.0f);
Vector4 diffuse = new Vector4(0.7f, 0.7f, 0.7f, 1.0f);
Vector4 specular = new Vector4(1.0f, 1.0f, 1.0f, 1.0f);
GL.Light(LightName.Light0, LightParameter.Position, position);
GL.Light(LightName.Light0, LightParameter.Ambient, ambient);
GL.Light(LightName.Light0, LightParameter.Diffuse, diffuse);
GL.Light(LightName.Light0, LightParameter.Specular, specular);
}
public static void SetMaterial()
{
GL.Color4(1.0f, 1.0f, 1.0f, 0.5f);
Vector4 ambient = new Vector4(0.3f, 0.3f, 0.3f, 0.5f);
Vector4 diffuse = new Vector4(1.0f, 1.0f, 1.0f, 0.5f);
Vector4 specular = new Vector4(0.0f, 0.0f, 0.0f, 0.5f);
GL.Material(MaterialFace.FrontAndBack, MaterialParameter.Ambient, ambient);
GL.Material(MaterialFace.FrontAndBack, MaterialParameter.Diffuse, diffuse);
GL.Material(MaterialFace.FrontAndBack, MaterialParameter.Specular, specular);
GL.Material(MaterialFace.FrontAndBack, MaterialParameter.Shininess, 1.0f);
}
}
and in the main Load function I also have these settings:
GL.Enable(EnableCap.Blend);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
//GL.Enable(EnableCap.DepthTest);
//GL.Enable(EnableCap.CullFace);
I know that maybe my question is not the best, but I don't know what this issue is, and I couldn't even find similar issues on the net.

Your problem is draw order: alpha blending is order-dependent, so you need to sort the transparent faces from the furthest to the closest (back to front) before drawing them. Draw opaque geometry first with the depth test enabled, then draw the sorted transparent faces.
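To sketch what that sorting looks like, here is a minimal example using System.Numerics rather than OpenTK types; the names are illustrative. Draw your opaque geometry first with the depth test on, then draw the transparent triangles in this order, re-sorting whenever the camera moves.

```csharp
using System;
using System.Linq;
using System.Numerics;

static class TransparencySort
{
    // Orders transparent triangles back to front so alpha blending composites
    // correctly. The sort key is the squared distance from the camera to each
    // triangle's centroid (squared distance preserves the ordering).
    public static Vector3[][] SortBackToFront(Vector3[][] triangles, Vector3 cameraPos)
    {
        return triangles
            .OrderByDescending(t =>
                (((t[0] + t[1] + t[2]) / 3f) - cameraPos).LengthSquared())
            .ToArray();
    }
}
```

After sorting, re-upload the triangle order to the VBO (or an index buffer), since the correct order changes as the camera moves.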

Related

SharpGL Screen / Cursor Coordinate to Model View

Using SharpGL, is it possible to transform the screen cursor position back to a model-view location (i.e. create a ray-cast function)?
The example I have been working with is similar to the following:
private void OpenGLControl_OpenGLDraw(object sender, SharpGL.OpenGLEventArgs args)
{
OpenGL gl = args.OpenGL;
gl.Clear(OpenGL.GL_COLOR_BUFFER_BIT | OpenGL.GL_DEPTH_BUFFER_BIT);
gl.LoadIdentity();
gl.Translate(0.0f, 0.0f, -6.0f);
gl.Rotate(rotatePyramid, 0.0f, 1.0f, 0.0f);
gl.Begin(OpenGL.GL_TRIANGLES);
gl.Color(1.0f, 0.0f, 0.0f);
gl.Vertex(0.0f, 1.0f, 0.0f);
gl.Color(0.0f, 1.0f, 0.0f);
gl.Vertex(-1.0f, -1.0f, 1.0f);
gl.Color(0.0f, 0.0f, 1.0f);
gl.Vertex(1.0f, -1.0f, 1.0f);
gl.Color(1.0f, 0.0f, 0.0f);
gl.Vertex(0.0f, 1.0f, 0.0f);
gl.Color(0.0f, 0.0f, 1.0f);
gl.Vertex(1.0f, -1.0f, 1.0f);
gl.Color(0.0f, 1.0f, 0.0f);
gl.Vertex(1.0f, -1.0f, -1.0f);
gl.Color(1.0f, 0.0f, 0.0f);
gl.Vertex(0.0f, 1.0f, 0.0f);
gl.Color(0.0f, 1.0f, 0.0f);
gl.Vertex(1.0f, -1.0f, -1.0f);
gl.Color(0.0f, 0.0f, 1.0f);
gl.Vertex(-1.0f, -1.0f, -1.0f);
gl.Color(1.0f, 0.0f, 0.0f);
gl.Vertex(0.0f, 1.0f, 0.0f);
gl.Color(0.0f, 0.0f, 1.0f);
gl.Vertex(-1.0f, -1.0f, -1.0f);
gl.Color(0.0f, 1.0f, 0.0f);
gl.Vertex(-1.0f, -1.0f, 1.0f);
gl.End();
gl.Flush();
}
Using SharpGL, and the approach demonstrated in the sample above, is it possible to implement ray-cast/mouse selection? Or would I need to delve deeper into the OpenGL workings (manually manage matrices, etc.) to achieve this?
Edit: To clarify, I am trying to retrieve the 3D interpretation of the 2D cursor location. Maybe through perspective (w) scaling, etc.
You can just use gluUnProject() and pass it the 2D screen-space mouse position you retrieved from the mouse callback. The function returns the corresponding 3D world-space position, which you can use however you want.
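If you'd rather not rely on gluUnProject, the same math can be done by hand. Below is a math-only sketch using System.Numerics (not SharpGL); `Picking`, `Unproject`, and `PickRay` are illustrative names, and it assumes the [0, 1] NDC depth range that System.Numerics projection matrices produce (classic OpenGL uses [-1, 1]).

```csharp
using System;
using System.Numerics;

static class Picking
{
    // Unprojects a 2D screen point back into world space at a given NDC depth,
    // mirroring what gluUnProject does. viewProj = view * proj in the
    // row-vector convention used by System.Numerics.
    public static Vector3 Unproject(Vector2 screen, float ndcZ,
                                    int width, int height, Matrix4x4 viewProj)
    {
        Matrix4x4 inv;
        if (!Matrix4x4.Invert(viewProj, out inv))
            throw new InvalidOperationException("view-projection matrix is singular");

        // Screen -> normalized device coordinates (x,y in [-1,1], y flipped).
        float x = 2f * screen.X / width - 1f;
        float y = 1f - 2f * screen.Y / height;

        Vector4 world = Vector4.Transform(new Vector4(x, y, ndcZ, 1f), inv);
        return new Vector3(world.X, world.Y, world.Z) / world.W; // perspective divide
    }

    // A pick ray: unproject the cursor at the near (0) and far (1) depths.
    public static (Vector3 origin, Vector3 dir) PickRay(Vector2 screen,
        int width, int height, Matrix4x4 viewProj)
    {
        Vector3 near = Unproject(screen, 0f, width, height, viewProj);
        Vector3 far  = Unproject(screen, 1f, width, height, viewProj);
        return (near, Vector3.Normalize(far - near));
    }
}
```

Intersect the returned ray with your scene geometry to find what the cursor is pointing at.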
I don't know if the previous answers worked for you, but by the way, glVertex, glRotate, glBegin and so on are very old GL functions (v1/v2). Is it your choice to use them, or are you not familiar with the v3/v4 way of doing 3D in OpenGL (VAOs, VBOs, and so on)?

C# SharpDX how to set texture coordinates correctly?

I am trying to render a texture on a cube, but I am doing something wrong: I have the texture, but the coordinates look wrong and I don't know how to set them correctly. What am I missing? I think I must do something about the IndexBuffer and UVs, but I'm quite confused.
The result I get: https://www.youtube.com/watch?v=_fdJAaU81sQ
Mesh.cs
public class Mesh : IDisposable
{
public string File;
public string Name;
public Vector4[] Vertices { get; set; }
public int VerticesCount=0;
public Vector3 Position; //BASED ON PIVOT
public Vector3 PivotPosition; //MOVE MESH BASED ON THIS POSITION
public Vector3 Rotation;
public double Weight;
public SharpDX.Direct3D11.Device d3dDevice;
public SharpDX.Direct3D11.Buffer VerticesBuffer;
public bool IsDisposed=false;
public bool IsSelected = false;
public int Triangles;
public string Texture_DiffuseMap;
public Mesh(string _name, Vector4[] _vertices, string _file, SharpDX.Direct3D11.Device _device, string _Texture_DiffuseMap = "")
{
Vertices = new[]
{
new Vector4(-1.0f, -1.0f, -1.0f, 1.0f), new Vector4(1.0f, 0.0f, 0.0f, 1.0f), // Front
new Vector4(-1.0f, 1.0f, -1.0f, 1.0f), new Vector4(1.0f, 0.0f, 0.0f, 1.0f),
new Vector4( 1.0f, 1.0f, -1.0f, 1.0f), new Vector4(1.0f, 0.0f, 0.0f, 1.0f),
new Vector4(-1.0f, -1.0f, -1.0f, 1.0f), new Vector4(1.0f, 0.0f, 0.0f, 1.0f),
new Vector4( 1.0f, 1.0f, -1.0f, 1.0f), new Vector4(1.0f, 0.0f, 0.0f, 1.0f),
new Vector4( 1.0f, -1.0f, -1.0f, 1.0f), new Vector4(1.0f, 0.0f, 0.0f, 1.0f),
new Vector4(-1.0f, -1.0f, 1.0f, 1.0f), new Vector4(0.0f, 1.0f, 0.0f, 1.0f), // BACK
new Vector4( 1.0f, 1.0f, 1.0f, 1.0f), new Vector4(0.0f, 1.0f, 0.0f, 1.0f),
new Vector4(-1.0f, 1.0f, 1.0f, 1.0f), new Vector4(0.0f, 1.0f, 0.0f, 1.0f),
new Vector4(-1.0f, -1.0f, 1.0f, 1.0f), new Vector4(0.0f, 1.0f, 0.0f, 1.0f),
new Vector4( 1.0f, -1.0f, 1.0f, 1.0f), new Vector4(0.0f, 1.0f, 0.0f, 1.0f),
new Vector4( 1.0f, 1.0f, 1.0f, 1.0f), new Vector4(0.0f, 1.0f, 0.0f, 1.0f),
new Vector4(-1.0f, 1.0f, -1.0f, 1.0f), new Vector4(0.0f, 0.0f, 1.0f, 1.0f), // Top
new Vector4(-1.0f, 1.0f, 1.0f, 1.0f), new Vector4(0.0f, 0.0f, 1.0f, 1.0f),
new Vector4( 1.0f, 1.0f, 1.0f, 1.0f), new Vector4(0.0f, 0.0f, 1.0f, 1.0f),
new Vector4(-1.0f, 1.0f, -1.0f, 1.0f), new Vector4(0.0f, 0.0f, 1.0f, 1.0f),
new Vector4( 1.0f, 1.0f, 1.0f, 1.0f), new Vector4(0.0f, 0.0f, 1.0f, 1.0f),
new Vector4( 1.0f, 1.0f, -1.0f, 1.0f), new Vector4(0.0f, 0.0f, 1.0f, 1.0f),
new Vector4(-1.0f,-1.0f, -1.0f, 1.0f), new Vector4(1.0f, 1.0f, 0.0f, 1.0f), // Bottom
new Vector4( 1.0f,-1.0f, 1.0f, 1.0f), new Vector4(1.0f, 1.0f, 0.0f, 1.0f),
new Vector4(-1.0f,-1.0f, 1.0f, 1.0f), new Vector4(1.0f, 1.0f, 0.0f, 1.0f),
new Vector4(-1.0f,-1.0f, -1.0f, 1.0f), new Vector4(1.0f, 1.0f, 0.0f, 1.0f),
new Vector4( 1.0f,-1.0f, -1.0f, 1.0f), new Vector4(1.0f, 1.0f, 0.0f, 1.0f),
new Vector4( 1.0f,-1.0f, 1.0f, 1.0f), new Vector4(1.0f, 1.0f, 0.0f, 1.0f),
new Vector4(-1.0f, -1.0f, -1.0f, 1.0f), new Vector4(1.0f, 0.0f, 1.0f, 1.0f), // Left
new Vector4(-1.0f, -1.0f, 1.0f, 1.0f), new Vector4(1.0f, 0.0f, 1.0f, 1.0f),
new Vector4(-1.0f, 1.0f, 1.0f, 1.0f), new Vector4(1.0f, 0.0f, 1.0f, 1.0f),
new Vector4(-1.0f, -1.0f, -1.0f, 1.0f), new Vector4(1.0f, 0.0f, 1.0f, 1.0f),
new Vector4(-1.0f, 1.0f, 1.0f, 1.0f), new Vector4(1.0f, 0.0f, 1.0f, 1.0f),
new Vector4(-1.0f, 1.0f, -1.0f, 1.0f), new Vector4(1.0f, 0.0f, 1.0f, 1.0f),
new Vector4( 1.0f, -1.0f, -1.0f, 1.0f), new Vector4(0.0f, 1.0f, 1.0f, 1.0f), // Right
new Vector4( 1.0f, 1.0f, 1.0f, 1.0f), new Vector4(0.0f, 1.0f, 1.0f, 1.0f),
new Vector4( 1.0f, -1.0f, 1.0f, 1.0f), new Vector4(0.0f, 1.0f, 1.0f, 1.0f),
new Vector4( 1.0f, -1.0f, -1.0f, 1.0f), new Vector4(0.0f, 1.0f, 1.0f, 1.0f),
new Vector4( 1.0f, 1.0f, -1.0f, 1.0f), new Vector4(0.0f, 1.0f, 1.0f, 1.0f),
new Vector4( 1.0f, 1.0f, 1.0f, 1.0f), new Vector4(0.0f, 1.0f, 1.0f, 1.0f),
};
Texture_DiffuseMap = _Texture_DiffuseMap;
_vertices = Vertices;
d3dDevice = _device;
VerticesCount = Vertices.Count();
Name = _name;
File = _file;
Meshes.Add(this);
}
// Other functions go here...
public void Dispose()
{
Dispose(true);
GC.SuppressFinalize(this);
IsDisposed = true;
}
protected virtual void Dispose(bool disposing)
{
if (disposing)
{
// free managed resources here (do not call Dispose() again, it would recurse)
}
// free native resources if there are any.
VerticesBuffer.Dispose();
IsDisposed = true;
}
public void Render()
{
d3dDevice.ImmediateContext.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleList;
d3dDevice.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VerticesBuffer, Utilities.SizeOf<Vector4>() * 2, 0));
d3dDevice.ImmediateContext.Draw(VerticesCount,0);
/*d3dDevice.ImmediateContext.InputAssembler.PrimitiveTopology = PrimitiveTopology.LineList;
d3dDevice.ImmediateContext.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(VerticesBuffer, Utilities.SizeOf<Vector4>() * 2, 0));
d3dDevice.ImmediateContext.Draw(VerticesCount, 0);
*/
VerticesBuffer.Dispose();
}
}
Program.cs
[STAThread]
private static void Main()
{
new Thread(new ThreadStart(() =>
{
DisposeCollector DC=new DisposeCollector();
var form = new RenderForm(Globals.Window_Title) { Width = Globals.Window_Size.Width, Height = Globals.Window_Size.Height, AllowUserResizing = false, MinimizeBox = false };
InputHandler IHandler = new InputHandler(form);
SampleDescription SamplerDesc = new SampleDescription(8, 0);
// SwapChain description
var desc = new SwapChainDescription()
{
BufferCount = 2,
ModeDescription = new ModeDescription(form.ClientSize.Width, form.ClientSize.Height, new Rational(60, 1), Format.R8G8B8A8_UNorm),
IsWindowed = true,
OutputHandle = form.Handle,
SampleDescription = SamplerDesc,
SwapEffect = SwapEffect.Discard,
Usage = Usage.RenderTargetOutput
};
var samplerStateDescription = new SamplerStateDescription
{
AddressU = TextureAddressMode.Wrap,
AddressV = TextureAddressMode.Wrap,
AddressW = TextureAddressMode.Wrap,
Filter = Filter.MinMagMipLinear
};
var rasterizerStateDescription = RasterizerStateDescription.Default();
rasterizerStateDescription.IsFrontCounterClockwise = true;
// Used for debugging dispose object references
Configuration.EnableObjectTracking = true;
// Disable throws on shader compilation errors
Configuration.ThrowOnShaderCompileError = false;
SharpDX.DXGI.Factory factory = new SharpDX.DXGI.Factory1();
SharpDX.DXGI.Adapter adapter = factory.GetAdapter(1);
Adapter[] availableAdapters = factory.Adapters;
foreach(Adapter _adapter in availableAdapters)
{
Console.WriteLine(_adapter.Description.Description);
}
// Create Device and SwapChain
Device device;
SwapChain swapChain;
Device.CreateWithSwapChain(adapter, DeviceCreationFlags.SingleThreaded, desc, out device, out swapChain);
var context = device.ImmediateContext;
//factory.MakeWindowAssociation(form.Handle, WindowAssociationFlags.IgnoreAll);
// Compile Vertex and Pixel shaders
var vertexShaderByteCode = ShaderBytecode.CompileFromFile("MiniCube.hlsl", "VS", "vs_5_0", ShaderFlags.Debug);
var vertexShader = new VertexShader(device, vertexShaderByteCode);
var pixelShaderByteCode = ShaderBytecode.CompileFromFile("MiniCube.hlsl", "PS", "ps_5_0", ShaderFlags.Debug);
var pixelShader = new PixelShader(device, pixelShaderByteCode);
var signature = ShaderSignature.GetInputSignature(vertexShaderByteCode);
// Layout from VertexShader input signature
var layout = new InputLayout(device, signature, new[]
{
new InputElement("POSITION", 0, Format.R32G32B32A32_Float, 0, 0),
new InputElement("NORMAL", 0, Format.R32G32B32A32_Float, 0, 0),
new InputElement("COLOR", 0, Format.R32G32B32A32_Float, 16, 0),
new InputElement("TEXCOORD", 0, Format.R32G32_Float, InputElement.AppendAligned, 0)
});
var samplerState = new SamplerState(device, samplerStateDescription);
Mesh mesh1 = new Mesh("mesh1", new[] { new Vector4(0, 0, 0, 1) }, "", device, "1_Purple.jpg") { IsSelected=true };
Mesh mesh2 = new Mesh("mesh2", new[] { new Vector4(0, 0, 0, 1) }, "", device, "1_GREEN.jpg");
//MenuCreator menu1 = new MenuCreator(device,new[] {new Vector4(0, 0, 0, 0) });
// Create Constant Buffer
var contantBuffer = new Buffer(device, Utilities.SizeOf<Matrix>(), ResourceUsage.Default, BindFlags.ConstantBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
ShaderResourceView textureView;
SharpDX.WIC.ImagingFactory2 ImagingFactory2 = new SharpDX.WIC.ImagingFactory2();
// Prepare All the stages
context.InputAssembler.InputLayout = layout;
context.VertexShader.SetConstantBuffer(0, contantBuffer);
context.VertexShader.Set(vertexShader);
context.PixelShader.Set(pixelShader);
context.PixelShader.SetSampler(0, samplerState);
//context.PixelShader.SetShaderResource(0, textureView);
Matrix proj = Matrix.Identity;
// Use clock
var clock = new Stopwatch();
FPS fps = new FPS();
clock.Start();
// Declare texture for rendering
bool userResized = true;
Texture2D backBuffer = null;
RenderTargetView renderView = null;
Texture2D depthBuffer = null;
DepthStencilView depthView = null;
// Setup handler on resize form
form.UserResized += (sender, args) => userResized = true;
// Setup full screen mode change F5 (Full) F4 (Window)
form.KeyUp += (sender, args) =>
{
if (args.KeyCode == Keys.F5)
swapChain.SetFullscreenState(true, null);
else if (args.KeyCode == Keys.F4)
swapChain.SetFullscreenState(false, null);
else if (args.KeyCode == Keys.Escape)
form.Close();
};
//CREATE DEPTH STENCIL DESCRIPTION
DepthStencilStateDescription depthSSD = new DepthStencilStateDescription();
depthSSD.IsDepthEnabled = false;
depthSSD.DepthComparison = Comparison.LessEqual;
depthSSD.DepthWriteMask = DepthWriteMask.Zero;
DepthStencilState DSState = new DepthStencilState(device, depthSSD);
Camera camera = new Camera();
camera.eye = new Vector3(0, 0, -5);
camera.target = new Vector3(0, 0, 0);
Globals.Render = true;
/*void DrawEmptyCircle(Vector3 startPoint, Vector2 radius, Color color)
{
List<VertexPositionColor> circle = new List<VertexPositionColor>();
float X, Y;
var stepDegree = 0.3f;
for (float angle = 0; angle <= 360; angle += stepDegree)
{
X = startPoint.X + radius.X * (float)Math.Cos((angle));
Y = startPoint.Y + radius.Y * (float)Math.Sin((angle));
Vector3 point = new Vector3(X, Y, 0);
circle.Add(new VertexPositionColor(point, color));
}
}*/
CubeApp.Windows.SystemInformation.Print(CubeApp.Windows.SystemInformation.GetSystemInformation());
HardwareInformation.GetHardwareInformation("Win32_DisplayConfiguration", "Description");
// Main loop
RenderLoop.Run(form, () =>
{
fps.Count();fps.setFormHeader(form);
// Prepare matrices
if (Globals.Render)
{
var view = camera.getView();
// If Form resized
if (userResized)
{
// Dispose all previous allocated resources
Utilities.Dispose(ref backBuffer);
Utilities.Dispose(ref renderView);
Utilities.Dispose(ref depthBuffer);
Utilities.Dispose(ref depthView);
foreach (Mesh _mesh in Meshes.MeshCollection)
{
if (_mesh.IsDisposed == false)
{
Utilities.Dispose(ref _mesh.VerticesBuffer);
}
}
// Resize the backbuffer
swapChain.ResizeBuffers(desc.BufferCount, form.ClientSize.Width, form.ClientSize.Height, Format.Unknown, SwapChainFlags.None);
// Get the backbuffer from the swapchain
backBuffer = Texture2D.FromSwapChain<Texture2D>(swapChain, 0);
// Renderview on the backbuffer
renderView = new RenderTargetView(device, backBuffer);
// Create the depth buffer
depthBuffer = new Texture2D(device, new Texture2DDescription()
{
Format = Format.D32_Float_S8X24_UInt,
ArraySize = 1,
MipLevels = 1,
Width = form.ClientSize.Width,
Height = form.ClientSize.Height,
SampleDescription = SamplerDesc,
Usage = ResourceUsage.Default,
BindFlags = BindFlags.DepthStencil,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None
});
// Create the depth buffer view
depthView = new DepthStencilView(device, depthBuffer);
// Setup targets and viewport for rendering
context.Rasterizer.SetViewport(new Viewport(0, 0, form.ClientSize.Width, form.ClientSize.Height, 0.0f, 1.0f));
//context.OutputMerger.SetDepthStencilState(DSState);
context.OutputMerger.SetTargets(depthView, renderView);
// Setup new projection matrix with correct aspect ratio
proj = Matrix.PerspectiveFovLH((float)Math.PI / 4.0f, form.ClientSize.Width / (float)form.ClientSize.Height, 0.1f, 100.0f);
// We are done resizing
userResized = false;
}
var time = clock.ElapsedMilliseconds / 1000.0f;
var viewProj = Matrix.Multiply(view, proj);
// Clear views
context.ClearDepthStencilView(depthView, DepthStencilClearFlags.Depth, 1.0f, 0);
context.ClearRenderTargetView(renderView, Color.WhiteSmoke);
// Update WorldViewProj Matrix
var worldViewProj = Matrix.RotationX(45) * Matrix.RotationY(0 * 2) * Matrix.RotationZ(0 * .7f) * viewProj;
worldViewProj.Transpose();
context.UpdateSubresource(ref worldViewProj, contantBuffer);
//Update Camera Position
Vector3 _camEye = camera.eye;
Vector3 _camTarget = camera.target;
if (IHandler.KeyW)
{
_camEye.Z+= 0.050f; _camTarget.Z += 0.050f;
}
if (IHandler.KeyS)
{
_camEye.Z -= 0.050f; _camTarget.Z -= 0.050f;
}
if (IHandler.KeyA)
{
_camEye.X -= 0.050f; _camTarget.X -= 0.050f;
}
if (IHandler.KeyD)
{
_camTarget.X += 0.050f;
_camEye.X += 0.050f;
}
if (IHandler.KeyQ)
{
}
camera.eye = _camEye;
camera.target = _camTarget;
camera.updateView();
// Draw the cube
foreach (Mesh __mesh in Meshes.MeshCollection)
{
if ( __mesh.IsSelected )
{
for (int i = 0; i <= __mesh.VerticesCount - 1; i++)
{
if (IHandler.KeyRight) __mesh.Vertices[i].X += 0.050f;
if (IHandler.KeyLeft) __mesh.Vertices[i].X -= 0.050f;
if (IHandler.KeyUp) __mesh.Vertices[i].Y += 0.050f;
if (IHandler.KeyDown) __mesh.Vertices[i].Y -= 0.050f;
}
}
var texture = TextureLoader.CreateTexture2DFromBitmap(device, TextureLoader.LoadBitmap(ImagingFactory2, __mesh.Texture_DiffuseMap));
textureView = new ShaderResourceView(device, texture);
context.PixelShader.SetShaderResource(0, textureView);
texture.Dispose();
textureView.Dispose();
__mesh.VerticesBuffer = SharpDX.Direct3D11.Buffer.Create(device, BindFlags.VertexBuffer, __mesh.Vertices);
//EnvironmentDisplayModes.SetDisplayMode(device, __mesh, EnvironmentDisplayModes.DisplayMode.Standart);
__mesh.Render();
}
// Present!
swapChain.Present(0, PresentFlags.None);
}
});
// Release all resources
foreach (Mesh msh in Meshes.MeshCollection)
{
msh.d3dDevice.Dispose();
msh.VerticesBuffer.Dispose();
}
DC.DisposeAndClear();
signature.Dispose();
vertexShaderByteCode.Dispose();
vertexShader.Dispose();
pixelShaderByteCode.Dispose();
pixelShader.Dispose();
layout.Dispose();
contantBuffer.Dispose();
depthBuffer.Dispose();
depthView.Dispose();
renderView.Dispose();
backBuffer.Dispose();
ImagingFactory2.Dispose();
device.Dispose();
context.Dispose();
swapChain.Dispose();
factory.Dispose();
adapter.Dispose();
DSState.Dispose();
samplerState.Dispose();
DC.Dispose();
form.Dispose();
})).Start();
}
}
MiniCube.hlsl
Texture2D ShaderTexture : register(t0);
SamplerState Sampler : register(s0);
struct VS_IN
{
float4 pos : POSITION;
float3 Normal : NORMAL;
float4 col : COLOR;
float2 TextureUV: TEXCOORD; // Texture UV coordinate
};
struct VS_OUTPUT
{
float4 pos : POSITION0;
float depth : TEXCOORD0;
float2 TextureUV: TEXCOORD;
};
struct PS_IN
{
float4 pos : SV_POSITION;
float4 col : COLOR;
float2 TextureUV: TEXCOORD;
float3 WorldNormal : NORMAL;
float3 WorldPosition : WORLDPOS;
};
float4x4 worldViewProj;
PS_IN VS( VS_IN input )
{
PS_IN output = (PS_IN)0;
output.pos = mul(input.pos, worldViewProj);
input.pos.z= input.pos.z - 0.9f;
input.pos.z *= 10.0f;
output.col = 1.0f-((input.pos.w /* * input.col*/) / (input.pos.z /* *input.col*/));
output.TextureUV = input.TextureUV;
return output;
}
float4 PS( PS_IN input ) : SV_Target
{
return ShaderTexture.Sample(Sampler, input.TextureUV)*input.col;
}
First of all, it is common to store the texture coordinates in the vertices.
So the first thing for you to do is to change your vertex structure to:
public struct MyVertex
{
public Vector3 Position;
public Vector3 Normal;
public Vector2 TextureCoord;
}
I do not see the necessity of making the position and normal a Vector4, so this should do it. As you use textures, there is also no need for a color in the vertex structure.
Next, change the input structure of the shader to match the structure above — in the shader itself, but also in your initialization of the input layout.
It is recommended to set the texture coordinates when initializing the mesh. For example, for a plane:
var vertices = new MyVertex[]
{
new MyVertex(){Position = new Vector3(0.0f, 0.0f, 0.0f),Normal = new Vector3(0.0f, 1.0f, 0.0f), TextureCoord = new Vector2(0.0f, 0.0f)},
new MyVertex(){Position = new Vector3(1.0f, 0.0f, 0.0f),Normal = new Vector3(0.0f, 1.0f, 0.0f), TextureCoord = new Vector2(1.0f, 0.0f)},
new MyVertex(){Position = new Vector3(1.0f, 0.0f, 1.0f),Normal = new Vector3(0.0f, 1.0f, 0.0f), TextureCoord = new Vector2(1.0f, 1.0f)},
new MyVertex(){Position = new Vector3(0.0f, 0.0f, 1.0f),Normal = new Vector3(0.0f, 1.0f, 0.0f), TextureCoord = new Vector2(0.0f, 1.0f)}
};
If you store your vertices like that, it should work like a charm.
The only thing left to do is to use the input texture coordinates when sampling the passed texture in the shader.
There is also no need to manipulate the texture coordinates in the shader if you want simple texture mapping. Otherwise you can look up different types of texture mapping on Wikipedia, such as spherical, box, or planar texture mapping.
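As an illustration of one such mapping, here is a sketch of planar texture mapping computed on the CPU (System.Numerics types; `PlanarUvXZ` is an illustrative name): project each vertex onto the XZ plane and normalize over the mesh's bounding rectangle so the UVs span [0, 1].

```csharp
using System;
using System.Numerics;

static class UvTools
{
    // Planar mapping onto the XZ plane: U comes from X and V from Z,
    // each normalized over the mesh's bounding rectangle.
    public static Vector2[] PlanarUvXZ(Vector3[] positions)
    {
        float minX = float.MaxValue, maxX = float.MinValue;
        float minZ = float.MaxValue, maxZ = float.MinValue;
        foreach (var p in positions)
        {
            minX = Math.Min(minX, p.X); maxX = Math.Max(maxX, p.X);
            minZ = Math.Min(minZ, p.Z); maxZ = Math.Max(maxZ, p.Z);
        }
        var uvs = new Vector2[positions.Length];
        for (int i = 0; i < positions.Length; i++)
            uvs[i] = new Vector2((positions[i].X - minX) / (maxX - minX),
                                 (positions[i].Z - minZ) / (maxZ - minZ));
        return uvs;
    }
}
```

Running this on the plane vertices above reproduces exactly the hand-written (0,0), (1,0), (1,1), (0,1) coordinates.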

Create an Octahedron (Flip Pyramid Upside Down) OpenTK

I am using OpenTK to make an octahedron. I have a square pyramid created, and I need to translate another one beneath the top one and flip it upside down to create the octahedron.
How do I flip the second pyramid upside down?
Here is my code so far:
#region --- Using Directives ---
using System;
using System.Collections.Generic;
using System.Windows.Forms;
using System.Threading;
using System.Drawing;
using OpenTK;
using OpenTK.Graphics;
using OpenTK.Graphics.OpenGL;
using OpenTK.Platform;
#endregion
namespace Octahedron
{
public class Octahedron : GameWindow
{
#region --- Fields ---
const float rotation_speed = 180.0f;
float angle;
#endregion
#region --- Constructor ---
public Octahedron()
: base(800, 600)
{ }
#endregion
#region OnLoad
protected override void OnLoad(EventArgs e)
{
base.OnLoad(e);
GL.ClearColor(Color.MidnightBlue);
GL.Enable(EnableCap.DepthTest);
}
#endregion
#region OnResize
protected override void OnResize(EventArgs e)
{
base.OnResize(e);
GL.Viewport(0, 0, Width, Height);
double aspect_ratio = Width / (double)Height;
OpenTK.Matrix4 perspective = OpenTK.Matrix4.CreatePerspectiveFieldOfView(MathHelper.PiOver4, (float)aspect_ratio, 1, 64);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadMatrix(ref perspective);
}
#endregion
#region OnUpdateFrame
protected override void OnUpdateFrame(FrameEventArgs e)
{
base.OnUpdateFrame(e);
var keyboard = OpenTK.Input.Keyboard.GetState();
if (keyboard[OpenTK.Input.Key.Escape])
{
this.Exit();
return;
}
}
#endregion
#region OnRenderFrame
protected override void OnRenderFrame(FrameEventArgs e)
{
base.OnRenderFrame(e);
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
Matrix4 lookat = Matrix4.LookAt(0, 5, 10, 0, 0, 2, 0, 5, 0);
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadMatrix(ref lookat);
angle += rotation_speed * (float)e.Time;
GL.Rotate(angle, 0.0f, 1.0f, 0.0f);
DrawPyramid();
GL.Translate(0.0f, -2.0f, 0.0f);
DrawPyramid();
this.SwapBuffers();
Thread.Sleep(1);
}
#endregion
#region private void DrawPyramid()
private void DrawPyramid()
{
GL.Begin(PrimitiveType.Triangles);
//Side0 (red)
//x, y, z
GL.Color3(1.0f, 0.0f, 0.0f); GL.Vertex3(0.0f, 1.0f, 0.0f);//Top Vertex
//x, y, z
GL.Color3(1.0f, 0.0f, 0.0f); GL.Vertex3(-1.0f, -1.0f, 1.0f);//Bottom Left Vertex
//x, y, z
GL.Color3(1.0f, 0.0f, 0.0f); GL.Vertex3(1.0f, -1.0f, 1.0f);//Bottom Right Vertex
//Side1 (blue)
//x, y, z
GL.Color3(0.0f, 0.0f, 1.0f); GL.Vertex3(0.0f, 1.0f, 0.0f);//Top Vertex
//x, y, z
GL.Color3(0.0f, 0.0f, 1.0f); GL.Vertex3(1.0f, -1.0f, 1.0f);//Bottom Left Vertex
//x, y, z
GL.Color3(0.0f, 0.0f, 1.0f); GL.Vertex3(1.0f, -1.0f, -1.0f);//Bottom Right Vertex
// Side2 (yellow)
//x, y, z
GL.Color3(1.0f, 1.0f, 0.0f); GL.Vertex3(0.0f, 1.0f, 0.0f);
//x, y, z
GL.Color3(1.0f, 1.0f, 0.0f); GL.Vertex3(1.0f, -1.0f, -1.0f);
//x, y, z
GL.Color3(1.0f, 1.0f, 0.0f); GL.Vertex3(-1.0f, -1.0f, -1.0f);
// Side3 (pink)
//x, y, z
GL.Color3(1.0f, 0.0f, 1.0f); GL.Vertex3(0.0f, 1.0f, 0.0f);
//x, y, z
GL.Color3(1.0f, 0.0f, 1.0f); GL.Vertex3(-1.0f, -1.0f, -1.0f);
//x, y, z
GL.Color3(1.0f, 0.0f, 1.0f); GL.Vertex3(-1.0f, -1.0f, 1.0f);
//Side4 (white)
//x, y, z
GL.Color3(1.0f, 1.0f, 1.0f); GL.Vertex3(0.0f, 1.0f, 0.0f);//Top Vertex
//x, y, z
GL.Color3(1.0f, 1.0f, 1.0f); GL.Vertex3(-1.0f, -1.0f, 1.0f);//Bottom Left Vertex
//x, y, z
GL.Color3(1.0f, 1.0f, 1.0f); GL.Vertex3(1.0f, -1.0f, 1.0f);//Bottom Right Vertex
GL.End();
}
#endregion
#region public static void Main()
[STAThread]
public static void Main()
{
using (Octahedron oct = new Octahedron())
{
oct.Run(60.0, 0.0);
}
}
#endregion
}
}
Hint: for simple things which align neatly with a reference plane in x, y, z coordinate space (i.e. the planes (x,y), (x,z), (y,z)), flipping something along a certain axis is equivalent to multiplying that coordinate by -1. In the more general case you can use a matrix multiplication, with a predefined matrix representing the proper reflection or rotation of all coordinates.
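A minimal sketch of the hint, using System.Numerics (the names are illustrative): flipping across the XZ plane is just Y → -Y, and the same operation expressed as a matrix is a scale of (1, -1, 1).

```csharp
using System;
using System.Numerics;

static class Flip
{
    // Mirrors vertices across the XZ plane using a (1, -1, 1) scale matrix;
    // negating each vertex's Y component directly gives the same result.
    public static Vector3[] FlipY(Vector3[] vertices)
    {
        Matrix4x4 mirror = Matrix4x4.CreateScale(1f, -1f, 1f);
        var result = new Vector3[vertices.Length];
        for (int i = 0; i < vertices.Length; i++)
            result[i] = Vector3.Transform(vertices[i], mirror);
        return result;
    }
}
```

In the render loop above, the equivalent immediate-mode call would be `GL.Scale(1.0f, -1.0f, 1.0f)` after the `GL.Translate`. Note that mirroring reverses triangle winding order, which matters if face culling is ever enabled.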

Texture in OpenGL show as single color

I'm fiddling with OpenGL with the help of OpenTK, but I can't display a simple texture.
My main problem is that my rectangle doesn't display the texture but rather a single color from it.
Here is my display loop:
// render graphics
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
GL.MatrixMode(MatrixMode.Projection);
GL.LoadIdentity();
GL.Ortho(-1.0, 1.0, -1.0, 1.0, 0.0, 4.0);
GL.Enable(EnableCap.Texture2D);
GL.BindTexture(TextureTarget.Texture2D, textureId);
GL.Begin(PrimitiveType.Quads);
GL.Vertex2(-1.0f, 1.0f);
GL.Vertex2(1.0f, 1.0f);
GL.Vertex2(1.0f, -1.0f);
GL.Vertex2(-1.0f,-1.0f);
GL.End();
GL.Disable(EnableCap.Texture2D);
game.SwapBuffers();
You need some texture coordinates. Right now every vertex uses the default (0, 0) texcoord, so the whole quad gets that one texel's color.
Something like this:
GL.Begin(PrimitiveType.Quads);
GL.TexCoord2(0.0f, 0.0f);
GL.Vertex2(-1.0f, 1.0f);
GL.TexCoord2(1.0f, 0.0f);
GL.Vertex2(1.0f, 1.0f);
GL.TexCoord2(1.0f, 1.0f);
GL.Vertex2(1.0f, -1.0f);
GL.TexCoord2(0.0f, 1.0f);
GL.Vertex2(-1.0f,-1.0f);
GL.End();

OpenGL. Moving light source

In my program I use OpenTK with C#, and I have trouble with a light source: I can't tie it to the camera, it just stays in a fixed position.
Here is the code of glControl1_Load():
float[] light_ambient = { 0.2f, 0.2f, 0.2f, 1.0f };
float[] light_diffuse = { 1.0f, 1.0f, 1.0f, 1.0f };
float[] light_specular = { 1.0f, 1.0f, 1.0f, 1.0f };
float[] spotdirection = { 0.0f, 0.0f, -1.0f };
GL.Light(LightName.Light0, LightParameter.Ambient, light_ambient);
GL.Light(LightName.Light0, LightParameter.Diffuse, light_diffuse);
GL.Light(LightName.Light0, LightParameter.Specular, light_specular);
GL.Light(LightName.Light0, LightParameter.ConstantAttenuation, 1.8f);
GL.Light(LightName.Light0, LightParameter.SpotCutoff, 45.0f);
GL.Light(LightName.Light0, LightParameter.SpotDirection, spotdirection);
GL.Light(LightName.Light0, LightParameter.SpotExponent, 1.0f);
GL.LightModel(LightModelParameter.LightModelLocalViewer, 1.0f);
GL.LightModel(LightModelParameter.LightModelTwoSide, 1.0f);
GL.Enable(EnableCap.Light0);
GL.Enable(EnableCap.Lighting);
GL.Enable(EnableCap.DepthTest);
GL.Enable(EnableCap.ColorMaterial);
GL.ShadeModel(ShadingModel.Flat);
glControl1_Paint():
GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
GL.MatrixMode(MatrixMode.Modelview);
GL.LoadMatrix(ref cameramatrix);
GL.Light(LightName.Light0, LightParameter.Position, new float[]{0.0f, 0.0f, 0.0f, 1.0f});
If I'm not wrong, the light-source coordinates are stored in eye-space coordinates. So, what's wrong?
Load the identity instead of your camera matrix into the modelview before setting the light position. Your light source will then always stay at the same location relative to your camera.
"If the w value is nonzero, the light is positional, and the (x, y, z) values specify the location of the light in homogeneous object coordinates. (See Appendix F.) This location is transformed by the modelview matrix and stored in eye coordinates."
More details here; look for "Example 5-7".
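To see why this works, here is a sketch of the bookkeeping OpenGL does (System.Numerics, row-vector convention; `EyeSpaceLight` is an illustrative name): the position you pass is multiplied by the modelview matrix current at the time of the call and stored in eye coordinates, so with the identity loaded, a light at (0, 0, 0, 1) is pinned to the eye itself.

```csharp
using System;
using System.Numerics;

static class LightDemo
{
    // A positional light (w = 1) is transformed by whatever modelview matrix
    // is current when GL.Light(..., LightParameter.Position, ...) is called,
    // and the result is stored in eye coordinates. This computes that value.
    public static Vector4 EyeSpaceLight(Vector4 lightPos, Matrix4x4 modelview)
    {
        return Vector4.Transform(lightPos, modelview);
    }
}
```

With the identity loaded, the stored position equals the one you passed, i.e. it is fixed relative to the camera; with the camera matrix loaded (as in the question), the light is transformed into a fixed world-space location instead.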
