MonoGame: stencil buffer not working - C#

I'm trying to add shadows to my MonoGame-based 2D game. At first I simply rendered semitransparent black textures into the frame, but they sometimes overlap, which looks nasty. So I tried rendering all the shadows into the stencil buffer first, and then drawing a single semitransparent texture across the whole screen through the stencil buffer. However, it doesn't work as expected.
The two problems are:
The shadows are still rendered into the scene.
The stencil buffer is seemingly unaffected: the semitransparent black texture covers the entire screen.
Here's the initialization code:
StencilCreator = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.Always,
    StencilPass = StencilOperation.Replace,
    ReferenceStencil = 1
};

StencilRenderer = new DepthStencilState
{
    StencilEnable = true,
    StencilFunction = CompareFunction.Equal,
    ReferenceStencil = 1,
    StencilPass = StencilOperation.Keep
};

var projection = Matrix.CreateOrthographicOffCenter(0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, 0, 0, 1);
var halfPixel = Matrix.CreateTranslation(-0.5f, -0.5f, 0);

AlphaEffect = new AlphaTestEffect(GraphicsDevice)
{
    DiffuseColor = Color.White.ToVector3(),
    AlphaFunction = CompareFunction.Greater,
    ReferenceAlpha = 0,
    World = Matrix.Identity,
    View = Matrix.Identity,
    Projection = halfPixel * projection
};

MaskingTexture = new Texture2D(GameEngine.GraphicsDevice, 1, 1);
MaskingTexture.SetData(new[] { new Color(0f, 0f, 0f, 0.3f) });

ShadowTexture = ResourceCache.Get<Texture2D>("Skins/Common/wall-shadow");
And the following code is the body of my Draw method:
// create stencil
GraphicsDevice.Clear(ClearOptions.Stencil, Color.Black, 0f, 0);

var batch = new SpriteBatch(GraphicsDevice);
batch.Begin(SpriteSortMode.Immediate, null, null, StencilCreator, null, AlphaEffect);
foreach (Vector2 loc in ShadowLocations)
{
    batch.Draw(
        ShadowTexture,
        loc,
        null,
        Color.White,
        0f,
        Vector2.Zero,
        2f,
        SpriteEffects.None,
        0f
    );
}
batch.End();

// render shadow texture through stencil
batch.Begin(SpriteSortMode.Immediate, null, null, StencilRenderer, null);
batch.Draw(MaskingTexture, GraphicsDevice.Viewport.Bounds, Color.White);
batch.End();
What could possibly be the problem? The same code worked fine in my XNA project.
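One MonoGame-specific thing worth checking: the back buffer only gets stencil bits if you ask for them, so stencil operations can silently have no effect without this. A one-line sketch, assuming a typical GraphicsDeviceManager field named graphics:

// In the Game constructor, before the GraphicsDevice is created:
// request a back-buffer depth format that includes 8 stencil bits.
graphics.PreferredDepthStencilFormat = DepthFormat.Depth24Stencil8;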

I worked around the issue by using a RenderTarget2D instead of the stencil buffer. Drawing solid black shadows to a RenderTarget2D and then drawing that render target into the scene with a semitransparent color does the trick and is much simpler to implement.
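A minimal sketch of that workaround, assuming the same fields as above (the full-screen render-target size and the 0.3f opacity mirror the MaskingTexture created earlier):

// 1. Draw all shadows in solid black into an offscreen render target.
var shadowTarget = new RenderTarget2D(GraphicsDevice, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height);
GraphicsDevice.SetRenderTarget(shadowTarget);
GraphicsDevice.Clear(Color.Transparent);
var batch = new SpriteBatch(GraphicsDevice);
batch.Begin();
foreach (Vector2 loc in ShadowLocations)
    batch.Draw(ShadowTexture, loc, null, Color.Black, 0f, Vector2.Zero, 2f, SpriteEffects.None, 0f);
batch.End();

// 2. Draw the whole target into the scene once with a semitransparent tint,
//    so overlapping shadows no longer darken each other.
GraphicsDevice.SetRenderTarget(null);
batch.Begin();
batch.Draw(shadowTarget, Vector2.Zero, Color.White * 0.3f);
batch.End();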

Related

Render to texture fails after resize

In a graphics application I render an image to a texture, then use that texture on a 3D model.
My problem is the following:
When the application starts everything is fine, but if I resize the view where I do the rendering and make it bigger, the texture on the 3D model disappears (it doesn't turn black; I think all values become 1). Making the view smaller doesn't make the texture disappear, but it is shown incorrectly (not resized).
The explanatory images compared three cases: the view resized smaller, not resized, and resized bigger; resizing even 1 pixel bigger is enough to make the image disappear.
The code that generates the render view is this:
private void CreateRenderToTexture(Panel view)
{
    Texture2DDescription t2d = new Texture2DDescription()
    {
        Height = view.Height,
        Width = view.Width,
        Format = Format.R32G32B32A32_Float,
        BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget, //| BindFlags.UnorderedAccess,
        CpuAccessFlags = CpuAccessFlags.None,
        OptionFlags = ResourceOptionFlags.None,
        SampleDescription = new SampleDescription(_multisample, 0),
        MipLevels = 1,
        Usage = ResourceUsage.Default,
        ArraySize = 1,
    };
    _svgTexture = new Texture2D(_device, t2d);
    _svgRenderView = new RenderTargetView(_device, _svgTexture);
}
private void RenderSVGToTexture()
{
    _camera.SetDefaultProjection();
    UpdatePerFrameBuffers();
    _dc.OutputMerger.SetTargets(_depthStencil, _svgRenderView); // depth stencil has the same dimensions as all other buffers
    _dc.ClearRenderTargetView(_svgRenderView, new Color4(1.0f, 1.0f, 1.0f));
    _dc.ClearDepthStencilView(_depthStencil, DepthStencilClearFlags.Depth | DepthStencilClearFlags.Stencil, 1.0f, 0);

    Entity e;
    if (RenderingManager.Scene.Entity2DExists("svgimage"))
    {
        RenderingManager.Scene.GetEntity2D("svgimage", out e);
        e.Draw(_dc);
    }
    _swapChain.Present(0, PresentFlags.None);
}
When rendering the 3D scene, I call this function before rendering the model:
private void SetTexture()
{
    Entity e;
    if (!RenderingManager.Scene.GetEntity3D("model3d", out e))
        return;
    e.ShaderType = ResourceManager.ShaderType.MAIN_MODEL;
    if (ResourceManager.SVGTexture == null)
    {
        e.ShaderType = ResourceManager.ShaderType.PNUVNOTEX;
        return;
    }

    // Note: the sampler state and shader resource view are created on
    // every call and disposed right after binding.
    SamplerDescription a = new SamplerDescription();
    a.AddressU = TextureAddressMode.Wrap;
    a.AddressV = TextureAddressMode.Wrap;
    a.AddressW = TextureAddressMode.Wrap;
    a.Filter = Filter.MinPointMagMipLinear;
    SamplerState b = SamplerState.FromDescription(ResourceManager.Device, a);

    ShaderResourceView svgTexResourceView = new ShaderResourceView(ResourceManager.Device, Texture2D.FromPointer(ResourceManager.SVGTexture.ComPointer));
    ResourceManager.Device.ImmediateContext.PixelShader.SetShaderResource(svgTexResourceView, 0);
    ResourceManager.Device.ImmediateContext.PixelShader.SetSampler(b, 0);
    b.Dispose();
    svgTexResourceView.Dispose();
}
Pixel shader:
Texture2D svg : register(t0);
Texture2D errorEstimate : register(t1);
SamplerState ss : register(s0);

float4 main(float4 position : SV_POSITION, float4 color : COLOR, float2 uv : UV) : SV_Target
{
    return color * svg.Sample(ss, uv); // * errorEstimate.Sample(ss, uv);
}
I don't understand what I am doing wrong; I hope you can help me see the mistake I am making. Thank you, and sorry for the bad English!
As it (almost) always turns out, I was making a very stupid mistake: I wasn't calling the correct resize function.
The Renderer2D class has a DoResize function that resizes the 2D-only buffers, while the abstract Renderer base class resizes the rest of the buffers. The mistake was that in the derived class I was calling the wrong base resize function!
Derived class:
protected override void DoResize(uint width, uint height)
{
    if (width == 0 || height == 0)
        return;
    // Here I was calling base.Resize, which was deprecated after a change
    // in the application architecture.
    base.DoResize(width, height);
    _camera.Width = width;
    _camera.Height = height;
    _svgTexture.Dispose();
    _svgRenderView.Dispose();
    CreateRenderToTexture(_viewReference);
    ResizePending = false;
}
Base class:
protected virtual void DoResize(uint width, uint height)
{
    Width = width;
    Height = height;
    _viewport = new Viewport(0, 0, Width, Height);
    _renderTarget.Dispose();
    if (_swapChain.ResizeBuffers(2, (int)width, (int)height, Format.Unknown, SwapChainFlags.AllowModeSwitch).IsFailure)
        Console.WriteLine("An error occurred while resizing buffers.");
    using (var resource = Resource.FromSwapChain<Texture2D>(_swapChain, 0))
        _renderTarget = new RenderTargetView(_device, resource);
    _depthStencil.Dispose();
    CreateDepthBuffer();
}
Maybe the code I posted can help someone who is trying to do some render-to-texture work, since I see there are always people who can't make it work :)

Vertex data is not drawn after DrawArrays call

Getting started with SharpGL after using other OpenGL frameworks in C#, I decided to start with the simplest of examples to make sure I understood any syntax changes/niceties of SharpGL.
So I'm attempting to render a single solid-coloured triangle, which shouldn't be too difficult.
I have two Vertex Buffers, one that stores the points of the Triangle and the other that stores the colours at each of the points. These are built up like so (The points one is the same except it uses the points array):
var colorsVboArray = new uint[1];
openGl.GenBuffers(1, colorsVboArray);
openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, colorsVboArray[0]);
this.colorsPtr = GCHandle.Alloc(this.colors, GCHandleType.Pinned).AddrOfPinnedObject();
openGl.BufferData(OpenGL.GL_ARRAY_BUFFER, this.colors.Length * Marshal.SizeOf<float>(), this.colorsPtr,
OpenGL.GL_STATIC_DRAW);
These are then set with the correct attrib pointer and enabled:
openGl.VertexAttribPointer(0, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
openGl.EnableVertexAttribArray(0);
But now when I draw using the call:
openGl.DrawArrays(OpenGL.GL_TRIANGLES, 0, 3);
I don't get anything on the screen. No exceptions, but the background is simply blank.
Naturally I presumed there were compilation issues with my shaders; fortunately, SharpGL gives me an easy way of checking, and both the vertex and fragment shaders show as correctly compiled and linked.
Can anyone see why this code does not display anything? It's basically the same code that I've used before.
Full Source:
internal class Triangle
{
    private readonly float[] colors = new float[9];
    private readonly ShaderProgram program;
    private readonly float[] trianglePoints =
    {
        0.0f, 0.5f, 0.0f,
        0.5f, -0.5f, 0.0f,
        -0.5f, -0.5f, 0.0f
    };
    private IntPtr colorsPtr;
    private IntPtr trianglePointsPtr;
    private readonly VertexBufferArray vertexBufferArray;

    public Triangle(OpenGL openGl, SolidColorBrush solidColorBrush)
    {
        for (var i = 0; i < this.colors.Length; i += 3)
        {
            this.colors[i] = solidColorBrush.Color.R / 255.0f;
            this.colors[i + 1] = solidColorBrush.Color.G / 255.0f;
            this.colors[i + 2] = solidColorBrush.Color.B / 255.0f;
        }

        this.vertexBufferArray = new VertexBufferArray();
        this.vertexBufferArray.Create(openGl);
        this.vertexBufferArray.Bind(openGl);

        var colorsVboArray = new uint[1];
        openGl.GenBuffers(1, colorsVboArray);
        openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, colorsVboArray[0]);
        this.colorsPtr = GCHandle.Alloc(this.colors, GCHandleType.Pinned).AddrOfPinnedObject();
        openGl.BufferData(OpenGL.GL_ARRAY_BUFFER, this.colors.Length * Marshal.SizeOf<float>(), this.colorsPtr, OpenGL.GL_STATIC_DRAW);

        var triangleVboArray = new uint[1];
        openGl.GenBuffers(1, triangleVboArray);
        openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, triangleVboArray[0]);
        this.trianglePointsPtr = GCHandle.Alloc(this.trianglePoints, GCHandleType.Pinned).AddrOfPinnedObject();
        openGl.BufferData(OpenGL.GL_ARRAY_BUFFER, this.trianglePoints.Length * Marshal.SizeOf<float>(), this.trianglePointsPtr, OpenGL.GL_STATIC_DRAW);

        openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, triangleVboArray[0]);
        openGl.VertexAttribPointer(0, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
        openGl.BindBuffer(OpenGL.GL_ARRAY_BUFFER, colorsVboArray[0]);
        openGl.VertexAttribPointer(1, 3, OpenGL.GL_FLOAT, false, 0, IntPtr.Zero);
        openGl.EnableVertexAttribArray(0);
        openGl.EnableVertexAttribArray(1);

        var vertexShader = new VertexShader();
        vertexShader.CreateInContext(openGl);
        vertexShader.SetSource(new StreamReader(
            Assembly.GetExecutingAssembly()
                .GetManifestResourceStream(@"OpenGLTest.Shaders.Background.SolidColor.SolidColorVertex.glsl"))
            .ReadToEnd());
        vertexShader.Compile();

        var fragmentShader = new FragmentShader();
        fragmentShader.CreateInContext(openGl);
        fragmentShader.SetSource(new StreamReader(
            Assembly.GetExecutingAssembly()
                .GetManifestResourceStream(@"OpenGLTest.Shaders.Background.SolidColor.SolidColorFragment.glsl"))
            .ReadToEnd());
        fragmentShader.Compile();

        this.program = new ShaderProgram();
        this.program.CreateInContext(openGl);
        this.program.AttachShader(vertexShader);
        this.program.AttachShader(fragmentShader);
        this.program.Link();
    }

    public void Draw(OpenGL openGl)
    {
        this.program.Push(openGl, null);
        this.vertexBufferArray.Bind(openGl);
        openGl.DrawArrays(OpenGL.GL_TRIANGLES, 0, 3);
        this.program.Pop(openGl, null);
    }
}
Vertex Shader:
#version 430 core

layout(location = 0) in vec3 vertex_position;
layout(location = 1) in vec3 vertex_color;

out vec3 color;

void main()
{
    color = vertex_color;
    gl_Position = vec4(vertex_position, 1.0);
}
Fragment Shader:
#version 430 core

// Must match the vertex shader's "out vec3 color" by name.
in vec3 color;

out vec4 frag_colour;

void main()
{
    frag_colour = vec4(color, 1.0);
}
I fixed this in the end, fairly simply.
I had previously reviewed the SharpGL code and noted that GL_DEPTH_TEST was enabled, so I presumed that GL_DEPTH_BUFFER_BIT was being cleared correctly and that I didn't have to do it myself.
After reviewing the Render code in SharpGL, it turns out this is not cleared by default; instead the onus is on the user to clear the depth buffer correctly.
Therefore I needed a simple call to Clear to fix this:
this.openGl.Clear(OpenGL.GL_DEPTH_BUFFER_BIT);
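For context, a minimal sketch of where that clear fits in the per-frame draw, using the Triangle class above (placing it at the top of Draw is an assumption; any point before the draw call works):

public void Draw(OpenGL openGl)
{
    // SharpGL enables GL_DEPTH_TEST but leaves clearing the depth buffer
    // to the caller, so clear it each frame before drawing.
    openGl.Clear(OpenGL.GL_DEPTH_BUFFER_BIT);

    this.program.Push(openGl, null);
    this.vertexBufferArray.Bind(openGl);
    openGl.DrawArrays(OpenGL.GL_TRIANGLES, 0, 3);
    this.program.Pop(openGl, null);
}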

How to change pixels per unit for a sprite after importing with WWW in Unity?

I am currently making a level editor where the user imports tiles from a file. It currently works, except that I want the pixels per unit of each imported sprite to change to 32.
Here is my code:
// Get tiles from file
StreamReader reader = new StreamReader(Application.dataPath + "/../Maps/" + mapName + "/Tiles/tiles.txt");
string line = reader.ReadLine();
while (!string.IsNullOrEmpty(line)) {
    string[] param = line.Split(',');
    foreach (TileTexture t in tileTextures) {
        if (t.name == param[0]) {
            Sprite sprite = Sprite.Create(t.texture, new Rect(0, 0, t.texture.width, t.texture.height), new Vector2(0, 0));
            sprite.pixelsPerUnit = 32; // THIS LINE DOESN'T WORK, GIVES READ-ONLY ERROR
            Tile tile = new Tile(param[0], sprite, new Vector2(float.Parse(param[1]), float.Parse(param[2])));
            tile.sprite.texture.filterMode = FilterMode.Point;
            tiles.Add(tile);
        }
    }
    line = reader.ReadLine();
}
Looking at the function Sprite.Create(), we see that its signature is:
public static Sprite Create(Texture2D texture,
    Rect rect,
    Vector2 pivot,
    float pixelsPerUnit = 100.0f,
    uint extrude = 0,
    SpriteMeshType meshType = SpriteMeshType.Tight,
    Vector4 border = Vector4.zero);
We can pass pixelsPerUnit as an optional parameter to the function. This is the only place it can be set; you cannot change it later because, as you have found out, pixelsPerUnit is a read-only property. So you just need to pass in your 32f here. The corrected code would be:
if (t.name == param[0]) {
    Sprite sprite = Sprite.Create(t.texture, new Rect(0, 0, t.texture.width, t.texture.height), new Vector2(0, 0), 32f);
    Tile tile = new Tile(param[0], sprite, new Vector2(float.Parse(param[1]), float.Parse(param[2])));
    tile.sprite.texture.filterMode = FilterMode.Point;
    tiles.Add(tile);
}

Generating an alpha mask from a texture

I am trying to create a function in one of my helper classes that will take a texture and generate an alpha mask from it.
Here's what I have so far:
public Texture2D CreateAlphaMask(Texture2D texture)
{
    if (texture == null)
        return null;

    RenderTarget2D target = new RenderTarget2D(Device, texture.Width, texture.Height);
    Device.SetRenderTarget(target);
    Device.Clear(Color.Black);

    using (SpriteBatch batch = new SpriteBatch(Device))
    {
        BlendState blendState = new BlendState()
        {
            AlphaBlendFunction = BlendFunction.Max,
            AlphaSourceBlend = Blend.One,
            AlphaDestinationBlend = Blend.One,
            ColorBlendFunction = BlendFunction.Add,
            ColorSourceBlend = Blend.InverseDestinationColor,
            ColorDestinationBlend = Blend.Zero,
            BlendFactor = Color.White,
            ColorWriteChannels = ColorWriteChannels.All
        };
        batch.Begin(SpriteSortMode.Deferred, blendState);
        batch.Draw(texture, Vector2.Zero, Color.White);
        batch.End();
    }

    Device.SetRenderTarget(null);
    return target;
}
What should happen is that where alpha = 0 the pixel is black, and where alpha = 1 the pixel is white (with values in between interpolated).
However, I can't seem to make it go "whiter" than the base image. That is, if I set it to blend white, at most it reaches the grey tones of the source, but never brighter. This isn't something I can create in advance, either, as it must be calculated during the game.
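One observation: with the target cleared to black, ColorSourceBlend = Blend.InverseDestinationColor reduces to (1 - 0) * srcColor = srcColor, so this blend state can never output anything brighter than the source color. A way to get exactly the alpha-to-greyscale mapping described above is to build the mask on the CPU instead of via blend states. A minimal sketch of that alternative (it assumes the same Device field; as a readback it suits one-off generation rather than per-frame use):

public Texture2D CreateAlphaMaskCpu(Texture2D texture)
{
    if (texture == null)
        return null;

    // Read the source pixels back from the texture.
    Color[] pixels = new Color[texture.Width * texture.Height];
    texture.GetData(pixels);

    // Map alpha to greyscale: alpha 0 -> black, alpha 255 -> white.
    for (int i = 0; i < pixels.Length; i++)
    {
        byte a = pixels[i].A;
        pixels[i] = new Color(a, a, a, (byte)255);
    }

    Texture2D mask = new Texture2D(Device, texture.Width, texture.Height);
    mask.SetData(pixels);
    return mask;
}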

How do I scale an image in a DirectX device?

I have a control (a PictureBox) in which I want to draw a 2D image with the help of DirectX.
I have a device and a sprite, and they work fine.
// Some code here...
_device = new Device(0, DeviceType.Hardware, pictureBox1,
    CreateFlags.SoftwareVertexProcessing,
    presentParams);
_sprite = new Sprite(_device);
// ...
_sprite.Draw(texture, textureSize, _center, position, Color.White);
The problem is that the texture is much larger than the PictureBox, and I just want to find a way to fit it there.
I tried setting the device.Transform property, but it doesn't help:
var transform = GetTransformationMatrix(textureSize.Width, textureSize.Height);
_device.SetTransform(TransformType.Texture0, transform);
Here is the GetTransformationMatrix method:
Matrix GetTransformationMatrix(float width, float height)
{
    var scaleWidth = pictureBox1.Width / width;
    var scaleHeight = pictureBox1.Height / height;
    // Note: new Matrix() is a zero matrix (Matrix is a struct), so M33 and
    // M44 remain 0 and the resulting transform is degenerate.
    var transform = new Matrix();
    transform.M11 = scaleWidth;
    transform.M12 = 0;
    transform.M21 = 0;
    transform.M22 = scaleHeight;
    return transform;
}
Thanks for any solution || help.
The solution was just to use this:
var transformMatrix = Matrix.Scaling(scaleWidth, scaleHeight, 0.0F);
instead of the hand-made method.
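Put together, a minimal sketch of the fix (the scale factors mirror the helper above; assigning the matrix to the sprite's Transform property is an assumption about where the transform gets applied):

var scaleWidth = pictureBox1.Width / (float)textureSize.Width;
var scaleHeight = pictureBox1.Height / (float)textureSize.Height;

// Matrix.Scaling builds a proper scale matrix (M44 = 1), unlike the
// zero-initialized new Matrix() in the hand-made helper.
var transformMatrix = Matrix.Scaling(scaleWidth, scaleHeight, 0.0f);
_sprite.Transform = transformMatrix;
_sprite.Draw(texture, textureSize, _center, position, Color.White);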
