So I am doing an oil pipeline simulation project which has three windows: the stats window (for pipeline details), the simulation window (for details about how the pipeline will work) and the middle window, a 2D simulation of the oil pipeline.
The 2D simulation is, in fact, there for aesthetic purposes; the main nitty-gritty is done within the Maths.cs class that I have already programmed. So my question is this:
Using shapes in the OpenTK library, all I can seem to build is triangles. I've included the code used to build the triangle, but there seem to be no other shapes available. Is there any way I can draw 1) a circle, 2) a rectangle and 3) a line?
Also, what does the 'BeginMode' class do? I feel this could crack the problem: by using something other than BeginMode, could I access other shapes through a different class?
Thanks :-)
private void viewportGL_Paint(object sender, PaintEventArgs e)
{
    if (!loaded)
        return;

    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
    GL.MatrixMode(MatrixMode.Modelview);
    GL.LoadIdentity();
    GL.Translate(x, 0, 0);

    GL.Color3(Color.Aqua);
    GL.Begin(BeginMode.Triangles);
    GL.Vertex2(10, 20);
    GL.Vertex2(100, 20);
    GL.Vertex2(100, 50);
    GL.End();

    viewportGL.SwapBuffers();
}
int x = 0;

private void viewportGL_KeyDown(object sender, KeyEventArgs e)
{
    if (e.KeyCode == Keys.Space)
        x += 4;
    viewportGL.Invalidate();
}
So I will now extend this into an answer:
float step = (float)Math.PI / 10;
GL.Color3(Color.Aqua);
GL.Begin(BeginMode.Triangles);
for (float angle = 0.0f; angle < Math.PI * 2 - 0.001f; angle += step)
{
    GL.Vertex2(0.0f, 0.0f);
    GL.Vertex2(Math.Cos(angle), Math.Sin(angle));
    GL.Vertex2(Math.Cos(angle - step), Math.Sin(angle - step));
}
GL.End();
BeginMode is an enum and it basically tells OpenGL the input primitive type. PrimitiveType does the same thing (newer OpenTK versions use it in place of BeginMode), and one could argue its name is more intuitive. For a rectangle you could write:
GL.Begin(PrimitiveType.Quads);
GL.Vertex2(-0.5f, -0.5f);
GL.Vertex2( 0.5f, -0.5f);
GL.Vertex2( 0.5f, 0.5f);
GL.Vertex2(-0.5f, 0.5f);
GL.End();
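A single line segment works the same way with the Lines primitive. A quick sketch in the same immediate-mode style; the endpoint coordinates are just placeholders:
GL.Begin(PrimitiveType.Lines);
GL.Vertex2(-0.5f, -0.5f);
GL.Vertex2( 0.5f,  0.5f);
GL.End();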
By the way you may want to take a look at The OpenTK Manual:
http://www.opentk.com/doc/graphics
Related
I applied a GUISkin to my script. In the button options I assigned a texture to the onActive background, but it did not work. Apart from this, all of the other changes in the GUISkin are working. The texture type I used for the onActive texture is Sprite (2D and UI). You can see how I apply it in the OnGUI() method in the following code.
//...
public GUISkin mySkin;
//...
private void OnGUI()
{
    //...
    GUI.skin = mySkin;

    turkishButton.onClick.AddListener(() => ButtonClickLanguageEvent.Add(1));
    englishButton.onClick.AddListener(() => ButtonClickLanguageEvent.Add(2));
    latinButton.onClick.AddListener(() => ButtonClickLanguageEvent.Add(3));

    Vector3 scale = new Vector3(Screen.width / nativeSize.x, Screen.height / nativeSize.y, 1.0f);
    GUI.matrix = Matrix4x4.TRS(new Vector3(0, 0, 0), Quaternion.identity, scale);

    float spacing = 30;
    float x = 7 + spacing;
    float y = 63;
    //...
}
No matter how much I researched the cause of the problem, I could not find it. If it helps, I can post any other part of the code.
As seen in the pictures, although I added a blue texture to the background of "onActive", nothing changed when I clicked.
I am trying to make my OpenGL program emit balls from a cube. There are two types of balls: a small blue one and a larger orange one. The balls should fall due to gravity. However, at the moment it only seems to emit one of each ball and that's it.
I have tried drawing the balls inside a loop, as follows:
for (int i = 0; i < 100; i = i + 1)
{
    Matrix4 mSphereOrange = Matrix4.CreateScale(mOrangeRadius) * Matrix4.CreateTranslation(mOrangePosition);
    SetUniformVariables(0.19125f, 0.0735f, 0.054f, 1, 0.647f, 0f, 0.256777f, 0.137622f, 0.086014f, 0.5f);
    GL.UniformMatrix4(uModelLocation, true, ref mSphereOrange);
    GL.BindVertexArray(mVAO_IDs[2]);
    GL.DrawElements(BeginMode.Triangles, mSphereModelUtility.Indices.Length, DrawElementsType.UnsignedInt, 0);

    Matrix4 mSphereBlue = Matrix4.CreateScale(mBlueRadius) * Matrix4.CreateTranslation(mBluePosition);
    SetUniformVariables(0, 0.1f, 0.06f, 0.0f, 0.50980392f, 0.50980392f, 0.50196078f, 0.50196078f, 0.50196078f, 10f);
    GL.UniformMatrix4(uModelLocation, true, ref mSphereBlue);
    GL.BindVertexArray(mVAO_IDs[2]);
    GL.DrawElements(BeginMode.Triangles, mSphereModelUtility.Indices.Length, DrawElementsType.UnsignedInt, 0);
}
Can anyone see why this may not be working? Or suggest a better way to create an emitter?
Any help would be much appreciated,
Lucy
I'm sure that the problem is related to the for loop. You create 100 orange spheres and 100 blue spheres simultaneously, all at the same position, so they overlap and look like a single ball of each colour.
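If you want each ball to behave as its own particle, one rough approach is to keep a list of balls, each with its own position and velocity, apply gravity every update, and then draw the whole list. Below is a minimal sketch under those assumptions: Ball, mBalls, mRandom, mCubePosition, UpdateBalls, DrawBalls and dt are hypothetical names, while the VAO, uniform location, radii and SetUniformVariables calls are taken from your snippet.
class Ball
{
    public Vector3 Position;
    public Vector3 Velocity;
    public bool IsOrange;
}

List<Ball> mBalls = new List<Ball>();
Random mRandom = new Random();

void UpdateBalls(float dt)
{
    // emit one new ball per update from the cube's centre (tune the rate and velocity to taste)
    mBalls.Add(new Ball
    {
        Position = mCubePosition, // hypothetical field holding the cube's centre
        Velocity = new Vector3((float)mRandom.NextDouble() - 0.5f, 2.0f, (float)mRandom.NextDouble() - 0.5f),
        IsOrange = mRandom.Next(2) == 0
    });

    foreach (Ball b in mBalls)
    {
        b.Velocity += new Vector3(0.0f, -9.81f, 0.0f) * dt; // gravity
        b.Position += b.Velocity * dt;
    }
}

void DrawBalls()
{
    GL.BindVertexArray(mVAO_IDs[2]);
    foreach (Ball b in mBalls)
    {
        if (b.IsOrange)
            SetUniformVariables(0.19125f, 0.0735f, 0.054f, 1, 0.647f, 0f, 0.256777f, 0.137622f, 0.086014f, 0.5f);
        else
            SetUniformVariables(0, 0.1f, 0.06f, 0.0f, 0.50980392f, 0.50980392f, 0.50196078f, 0.50196078f, 0.50196078f, 10f);

        float radius = b.IsOrange ? mOrangeRadius : mBlueRadius;
        Matrix4 model = Matrix4.CreateScale(radius) * Matrix4.CreateTranslation(b.Position);
        GL.UniformMatrix4(uModelLocation, true, ref model);
        GL.DrawElements(BeginMode.Triangles, mSphereModelUtility.Indices.Length, DrawElementsType.UnsignedInt, 0);
    }
}
You would call UpdateBalls from your update/tick and DrawBalls from your render pass, instead of the fixed 100-iteration loop.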
I am working on a 2D menu for a game using OpenTK in C#.
At the moment, the menu is separated into 3 different texture quads, or 'layers', which look like this.
Layer 1: The base appearance of the buttons.
Layer 2: The appearance of the 'continue' button when the mouse hovers over it.
Layer 3: A 'gear' which holds the buttons, as well as their text/name.
Each of these layers consists of a semi-transparent (32-bit .png) texture bound to a Quad.
When drawing only layers 1 & 3, the textures seem to work properly, but when I want to show Layer 2 as well, layer 3 disappears from my screen, as seen here. In this image, only my base buttons (layer 1) and highlighted 'continue' button (layer 2) are drawn.
I believe this is an issue with my blend function, and the fact that I am drawing more than 2 sprites. I am relatively new to OpenTK/OpenGL, and I was hoping someone here could help me fix this problem. I will add some of my code below:
When the game starts, I set up some GL properties:
GL.Enable(EnableCap.DepthTest);
GL.Enable(EnableCap.CullFace);
GL.Enable(EnableCap.Texture2D);
GL.Enable(EnableCap.Blend); // I might be missing something here, like TexEnv?
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
This part of the code draws my 2D elements. I took it from a tutorial on OpenTK.
public void DrawHUD()
{
    // Clear only the depth buffer, so that everything
    // we draw for the HUD will appear in front of the
    // world objects.
    GL.Clear(ClearBufferMask.DepthBufferBit);

    // Reset the ModelView matrix so the following
    // objects are not affected by the camera position
    GL.LoadIdentity();

    //// draw 3D HUD elements (weapon in the main game)

    // Disable lighting for the HUD graphics
    // GL.Disable(EnableCap.Lighting);

    // Save the current perspective projection
    GL.MatrixMode(MatrixMode.Projection);
    GL.PushMatrix();

    // Switch to orthogonal view
    GL.LoadIdentity();
    GL.Ortho(0, Width, 0, Height, -1, 1); // Bottom-left corner pixel has coordinate (0, 0)

    // Go back to working with the ModelView
    GL.MatrixMode(MatrixMode.Modelview);

    //// Draw the HUD elements
    GameEngine.Draw2D();

    // Switch back to perspective view
    GL.MatrixMode(MatrixMode.Projection);
    GL.PopMatrix();
    GL.MatrixMode(MatrixMode.Modelview);

    // Turn lighting back on
    // GL.Enable(EnableCap.Lighting);
}
I add the menu 'layers' to an ArrayList, which I then use for drawing. Graphic is a class which holds the position/texture info of a 'layer':
public virtual void Draw2D()
{
    Vector2 pos;
    int Z = 0; // to fight z-fighting? this fixed an issue where drawing a 2nd sprite would also make the 1st one disappear partially
    foreach (Graphic G in _2DList)
    {
        if (G.visible())
        {
            pos = G.position();
            Renderer.DrawHUDSprite(pos.X, pos.Y, Z, G.W(), G.H(), G.Texture());
            Z++;
        }
    }
}
Finally, my draw function for hudsprites looks like this:
public static void DrawHUDSprite(float x, float y, float z, float width, float height, int textID)
{
    // Save the ModelView matrix
    GL.PushMatrix();

    // Move to the correct location on screen
    GL.Translate(x, y, z);
    GL.BindTexture(TextureTarget.Texture2D, textID);

    // Setup for drawing the texture
    GL.Color3(Color.White);

    // Draw a flat rectangle
    GL.Begin(PrimitiveType.Quads);
    GL.TexCoord2(0.0f, 1.0f); GL.Vertex3(0, 0, 0);
    GL.TexCoord2(1.0f, 1.0f); GL.Vertex3(width, 0, 0);
    GL.TexCoord2(1.0f, 0.0f); GL.Vertex3(width, height, 0);
    GL.TexCoord2(0.0f, 0.0f); GL.Vertex3(0, height, 0);
    GL.End();

    // Restore the ModelView matrix
    GL.BindTexture(TextureTarget.Texture2D, 0);
    GL.PopMatrix();
}
If this has anything to do with the way I load my textures, I will add the code for this as well.
I managed to solve my issue on my own: the problem lies with my Draw2D() function, where I use a Z coordinate to prevent clipping issues. The Z coordinate when drawing in 2D could only be in the range [0..1].
My earlier solution, which increments Z by 1 (Z++), would cause issues with more than 2 textures/Graphics (Z > 1, meaning the quad is not displayed). The fixed version looks like this:
public virtual void Draw2D()
{
    if (_2DList.Count > 0) // prevents dividing by 0, and skips memory allocation if we have no Graphics
    {
        Vector2 pos;
        float Z = 0; // to fight z-fighting, the sprites are drawn in the order they were added
        float step = 1.0f / _2DList.Count; // limits Z to the range [0..1]
        foreach (Graphic G in _2DList)
        {
            if (G.visible())
            {
                pos = G.position();
                Renderer.DrawHUDSprite(pos.X, pos.Y, Z, G.W(), G.H(), G.Texture());
                Z += step; // with Z starting at 0, it will never reach 1
            }
        }
    }
}
I'm using OpenTK and C#, and I have defined a plane in 3D space as follows:
GL.Begin(BeginMode.Quads);
GL.Color3(Color.Magenta);
GL.Vertex3(-100.0f, -25.0f, -150.0f);
GL.Vertex3(-100.0f, -25.0f, 150.0f);
GL.Vertex3( 200.0f, -25.0f, 100.0f);
GL.Vertex3( 200.0f, -25.0f, -100.0f);
GL.End();
Can anyone please help me to make the plane transparent?
So you want something like this?
There are a lot of things to take care of to get there.
It all starts with a Color object that contains an alpha value < 255, for example Color.FromArgb(85, Color.Turquoise) for the sphere below.
The main render class sets up the camera view, renders all the lights, and then renders all the objects in the scene:
public void RenderOnView(GLControl control)
{
    control.MakeCurrent();
    var camera = views[control];

    GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
    GL.Disable(EnableCap.CullFace);
    GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);

    camera.LookThrough();

    if (EnableLights)
    {
        GL.LightModel(LightModelParameter.LightModelAmbient, new[] { 0.2f, 0.2f, 0.2f, 1f });
        GL.LightModel(LightModelParameter.LightModelLocalViewer, 1);
        GL.Enable(EnableCap.Lighting);
        foreach (var light in lights)
        {
            light.Render();
        }
    }
    else
    {
        GL.Disable(EnableCap.Lighting);
        GL.ShadeModel(ShadingModel.Flat);
    }

    GL.Enable(EnableCap.LineSmooth);    // This is optional
    GL.Enable(EnableCap.Normalize);     // These two are critical to have
    GL.Enable(EnableCap.RescaleNormal);

    for (int i = 0; i < objects.Count; i++)
    {
        GL.PushMatrix();
        objects[i].Render();
        GL.PopMatrix();
    }

    control.SwapBuffers();
}
Then each object has base rendering code, Render(), which calls more specialized code, Draw():
public void Render()
{
    if (Shading == ShadingModel.Smooth)
    {
        GL.Enable(EnableCap.ColorMaterial);
        GL.ColorMaterial(MaterialFace.FrontAndBack, ColorMaterialParameter.AmbientAndDiffuse);
        GL.Material(MaterialFace.FrontAndBack, MaterialParameter.Specular, SpecularColor);
        GL.Material(MaterialFace.FrontAndBack, MaterialParameter.Emission, EmissionColor);
        GL.Material(MaterialFace.FrontAndBack, MaterialParameter.Shininess, Shinyness);
        GL.Enable(EnableCap.Lighting);
    }
    else
    {
        GL.Disable(EnableCap.ColorMaterial);
        GL.Disable(EnableCap.Lighting);
    }
    GL.ShadeModel(Shading);
    GL.Translate(Position);
    GL.Scale(Scale, Scale, Scale);
    Draw(); // Draws triangles and quads to make up a shape
}
and, for example, to draw a quad surface you have:
protected void DrawQuad(Color color, params Vector3[] nodes)
{
    GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Fill);
    GL.Enable(EnableCap.PolygonOffsetFill);

    // special code when translucent
    if (color.A < 255)
    {
        GL.Enable(EnableCap.Blend);
        GL.DepthMask(false);
    }

    GL.Begin(PrimitiveType.Quads);
    GL.Color4(color); // this is where the color with alpha is used
    for (int i = 0; i < nodes.Length; i++)
    {
        GL.Vertex3(nodes[i]);
    }
    GL.End();

    // special code when translucent
    if (color.A < 255)
    {
        GL.Disable(EnableCap.Blend);
        GL.DepthMask(true);
    }
}
and also the code to draw the outline of a quad, to be called after DrawQuad():
protected void DrawLineLoop(Color color, params Vector3[] nodes)
{
    GL.Disable(EnableCap.PolygonOffsetFill);
    GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Line);

    if (color.A < 255)
    {
        GL.Enable(EnableCap.Blend);
        GL.DepthMask(false);
    }

    GL.Begin(PrimitiveType.LineLoop);
    GL.Color4(color);
    for (int i = 0; i < nodes.Length; i++)
    {
        GL.Vertex3(nodes[i]);
    }
    GL.End();

    if (color.A < 255)
    {
        GL.Disable(EnableCap.Blend);
        GL.DepthMask(true);
    }
}
Finally I found the solution to my question:
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.One);
GL.Enable(EnableCap.Blend);
//Definition of Plane
GL.Begin(BeginMode.Quads);
GL.Color4(0, 0.2, 1, 0.5);
GL.Vertex3(-100.0f, -25.0f, -150.0f);
GL.Vertex3(-100.0f, -25.0f, 150.0f);
GL.Vertex3( 200.0f, -25.0f, 100.0f);
GL.Vertex3( 200.0f, -25.0f, -100.0f);
GL.End();
GL.Disable(EnableCap.Blend);
In computing, transparency effects are obtained using colour blending. For the special case of transparency, we talk about 'alpha blending'.
For transparency, the blending factor is usually stored in the 4th component of the colour (the A in RGBA), which stands for alpha. So you have to set all your colours with it.
Example for half-transparent blue (like glass):
GL.Color4(0,0,1,0.5f);
In OpenGL, blending has to be activated with the following command, which enables a supplementary stage in the rendering pipeline.
GL.Enable( EnableCap.Blend );
Then, because blending can be used to mix colours for purposes other than transparency, you have to specify the blending function to use. Here is the common function for transparency:
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
http://www.opentk.com/doc/chapter/2/opengl/fragment-ops/blending
Simply use GL.Color4 instead of GL.Color3. The 4th value will be the alpha.
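For example, for the plane above (a minimal sketch; blending still has to be enabled and the blend function set, as shown in the other answers):
GL.Enable(EnableCap.Blend);
GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);
GL.Color4(1.0f, 0.0f, 1.0f, 0.5f); // magenta at 50% alpha, instead of GL.Color3(Color.Magenta)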
I am pretty new to 3D, and my question may show it:
I made a "viewer" program in WPF that renders stuff on screen, which can also be rotated. For rotation I use this, which works for my taste:
Code: WPF
Transform3DGroup tg = new Transform3DGroup();
tg.Children.Add(new ScaleTransform3D(sx, sy, 1));
tg.Children.Add(new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(1, 0, 0), rx)));
tg.Children.Add(new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 1, 0), ry)));
tg.Children.Add(new RotateTransform3D(new AxisAngleRotation3D(new Vector3D(0, 0, 1), rz)));
tg.Children.Add(new TranslateTransform3D(x, y, z));
darabka.Transform = tg;
TeljesModell.Children.Add(darabka);
I decided to remake this program in XNA, because that seems to be a bit faster; however, I could not make it work.
I tried this way (code, XNA):
Pozició = Matrix.CreateScale(sx, sy, 1f); // scaling
Pozició *= Matrix.CreateFromAxisAngle(Vector3.UnitX, rx); // rotation
Pozició *= Matrix.CreateFromAxisAngle(Vector3.UnitY, ry);
Pozició *= Matrix.CreateFromAxisAngle(Vector3.UnitZ, rz);
Pozició *= Matrix.CreateTranslation(x, y, z); // centre point
I even tried CreateFromYawPitchRoll and CreateRotationX (Y, Z) too, but without luck; the stuff drawn on screen was always rotated differently. So I guessed I would ask the brains here if they know what math I have to put in between the two technologies.
Thanks in advance.
edit: I asked on other forums, where I was asked for details. I copy/paste them here too.
The XNA code is like:
main
...
protected override void LoadContent()
{
    t3 = new Airplane(); // this is a "zone" object, having zone regions and zone objects in it; all of them together give the zone itself as a static object that the player walks in
    Kamera.Példány.Pozició = new Vector3(1362, 627, -757); // starting position in the zone - this is the camera position, so the player's starting position (the camera is FPS)
    ...
}
...
protected override void Update(GameTime gameTime)
{
    ...
    Kameramozgatása(); // keys controlling the camera movement
    // this moves the FPS camera around
}
...
protected override void Draw(GameTime gameTime)
{
    ...
    ÁltalánosGrafikaiObjektumok.Effect.View = Kamera.Példány.Idenézünk; // what the camera views
    ÁltalánosGrafikaiObjektumok.Effect.Projection = projectionMatrix; // ...and what the camera views FROM is set in Kameramozgatása()
    ...
    t3.Rajzolj(); // draw the zone object
    ...
}
zone object
constructor: sets the effect and device to the same as main, and sets Pozició (a matrix containing the current position and rotations) to the origin
...
public virtual Alapmodel Inicializálás(float x, float y, float z, float rx, float ry, float rz, float sx, float sy)
{
    // this initializer runs when the coordinates come from a file and are not set with Pozició = Matrix....., so we have to convert; it runs only one time. overrides set the vertex positions and textures
    VertexDeclaration = new VertexDeclaration(device, VertexPositionTexture.VertexElements);
    Pozició = Matrix.CreateScale(sx, sy, 1f); // scaling
    Pozició *= Matrix.CreateFromAxisAngle(Vector3.UnitX, rx); // rotation
    Pozició *= Matrix.CreateFromAxisAngle(Vector3.UnitY, ry);
    Pozició *= Matrix.CreateFromAxisAngle(Vector3.UnitZ, rz);
    Pozició *= Matrix.CreateTranslation(x, y, z); // centre point
    // we set the starting position here and nowhere else; from here only the main loop could modify it, but it does not. this is the zone, and it has only static objects - they never move
}
public void Rajzolj()
{
    // this draws the object (which can be a zone or a static object in it)
    foreach (EffectPass pass in effect.CurrentTechnique.Passes)
    {
        pass.Begin();
        foreach (Elem elem in Elemek)
        {
            // we go through the elements
            effect.Texture = elem.Textúra;
            // positioning
            effect.World = Pozició; // we take the initialized position
            effect.CommitChanges(); // this is needed; if you do not use it, the textures will be screwed up
            device.VertexDeclaration = VertexDeclaration;
            device.DrawUserIndexedPrimitives<VertexPositionTexture>(PrimitiveType.TriangleList, Poziciók, 0, Poziciók.Length, elem.Indexek, 0, elem.Indexek.Length / 3);
        }
        pass.End();
    }
}
That's it. I still guess it is some conversion issue and not the drawing code; nowhere else is the position altered. I assumed that matrices are just like WPF's stacked transforms - my problem was and is that I do not know the math to convert between the two. The WPF code works perfectly, the models in the zone show up fine. The XNA code is wrong somehow, because Inicializálás() has the wrong conversion from x, y, z, etc. in it. This is where I need help.
Thanks in advance.
I advise against storing your orientation in an angular fashion like that...
But there are times when your issue can be solved in the XNA version simply by applying the Z rotation first, then X, then Y.
You could use CreateWorld too, but that has no scaling in it, so I decided to solve this step by step: first moving to the origin, then scaling, then rotating the scaled object, then positioning the scaled and rotated object.
You have to use radians instead of degrees; if you have degrees, use MathHelper.ToRadians(), like I did.
If you come from WPF and your rotation angles work there but not here, try multiplying your angle by 360/512 before converting it to radians. You can see an example of this in a comment below; apply it to all of rx, ry and rz.
Pozició = Matrix.CreateTranslation(Vector3.Zero); // base position is the origin
if (sx != 1 || sy != 1) Pozició *= Matrix.CreateScale(sx, sy, 1f); // scaling
if (rx != 0 || ry != 0 || rz != 0) Pozició *= Matrix.CreateFromYawPitchRoll(
    MathHelper.ToRadians(rx) // from WPF: MathHelper.ToRadians(rx * 360f / 512f)
    , MathHelper.ToRadians(ry)
    , MathHelper.ToRadians(rz)
);
if (x != 0 || y != 0 || z != 0) Pozició *= Matrix.CreateTranslation(x, y, z); // centre point
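For reference, 360/512 is exactly 0.703125, so the conversion from the comment above could be wrapped in a small helper (a hypothetical sketch; WpfAngleToRadians is not part of the original code):
// Assumes the WPF viewer's angles use 512 units per full turn, i.e. 1 unit = 360/512 = 0.703125 degrees.
static float WpfAngleToRadians(float angleUnits)
{
    return MathHelper.ToRadians(angleUnits * 0.703125f); // same as angleUnits * 360f / 512f
}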
If you can give a faster formula than *360/512, or something that looks better, please comment.