MonoGame sprite alpha blending - C#

At the moment I have a 2D grid of terrain textures, which I draw using the standard XNA/MonoGame SpriteBatch.Draw() call. In order to have smoother transitions between adjacent but different terrain textures, I would like to blend textures within a certain area (say 3x3) around a center texture. The further away a neighboring texture is, the smaller its impact on the final image should be.
E.g., a (3x3) weighting matrix weight around a central texture at (1,1) could look something like this:
0.3 0.5 0.3
0.5 1.0 0.5
0.3 0.5 0.3
Basically, a simple problem. However, I am currently struggling with the different blending modes on offer and how they work. My initial idea was to simply use BlendState.AlphaBlend together with
spriteBatch.Draw(Tex2D, TargetRec, new Color(Color.White, weight[x,y] / SumOfWeights));
The white color should preserve the original texture colors, and the alpha values weight[x,y] / SumOfWeights add up to 1 over the whole neighborhood. Instead, what I get is a very bright image with the background shining through.
A better result can be achieved when the tint color is also set to a gray with the same value as the alpha channel. Again, though, the background shines through when more than two textures are used.
There must be a systematic error in my concept, but at the moment I am unable to find it. Please point out my mistake & thanks in advance.

Try this:
spriteBatch.Draw(Tex2D, TargetRec, Color.White * weight[x,y]);
This is the proper way to draw things transparently when SpriteBatch is using premultiplication. The alpha channel works differently when drawing with premultiplication. The basic change you need to make is to multiply the color by the transparency instead of setting the alpha channel of the color.
Here is some more information. http://blogs.msdn.com/b/shawnhar/archive/2009/11/06/premultiplied-alpha.aspx
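For completeness, here is a rough sketch of the weighted 3x3 blend in that style. Note that it uses BlendState.Additive rather than AlphaBlend, so the normalized contributions sum to exactly 1 (a straight weighted average); tileTextures, cx/cy (the center cell's indices) and TargetRec are hypothetical stand-ins for your own grid data:

float[,] weight =
{
    { 0.3f, 0.5f, 0.3f },
    { 0.5f, 1.0f, 0.5f },
    { 0.3f, 0.5f, 0.3f },
};
float sumOfWeights = 0f;
foreach (float w in weight) sumOfWeights += w; // 3.6 for this kernel

// Additive blending accumulates texture * weight, so the tiles average exactly.
// Assumes the destination starts out black/empty under this cell.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
for (int dy = 0; dy < 3; dy++)
    for (int dx = 0; dx < 3; dx++)
    {
        Texture2D tex = tileTextures[cx + dx - 1, cy + dy - 1];
        spriteBatch.Draw(tex, TargetRec, Color.White * (weight[dy, dx] / sumOfWeights));
    }
spriteBatch.End();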

Related

Allow a non-pixel-art game to handle pixel art (integer movement)

I'm making a 2D platform game engine with C# and MonoGame that uses floating-point numbers for all of its maths. It works well so far; however, I'd like to have the option of using the engine for a retro-style pixel-art game, which is obviously best done with integer maths. Is there a simple way to achieve this?
The simplest method I can come up with is to do all the calculations with floating point numbers, but then when I draw each sprite, I round the position to the nearest multiple of the scale of the pixel art (for example, to the nearest 5 pixels for pixel art that is scaled 5x). This functions, but the movement of the player character on the screen doesn't feel smooth.
I've tried rounding the player position itself each time I update it, but this breaks my collision detection, causing the player to be stuck on the floor.
Is there a standard way people solve this?
Thanks :)
Apologies for resurrecting an ancient question, but I think the simplest way to do this, for both programmer and hardware (assuming you're using a GPU pipeline with triangle rasterization; it's trivial otherwise), and the one that most directly recreates the look and feel of old hardware rendering at roughly 200p, is to render your entire game to an offscreen texture at that actual 200p resolution and run your entire game logic, including collision detection, at that resolution. If you have to work in floating point, you can shift all coordinates by half a pixel, which, combined with nearest-neighbor sampling, makes them land precisely on the desired pixel.
Then just draw a rectangle with the offscreen texture to the screen, scaled to the full display resolution using nearest-neighbor sampling and integer scale factors (2x, 3x, 5x, etc.). That is much simpler than scaling each individual sprite and tile.
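A minimal MonoGame sketch of that setup, assuming a hypothetical 320x200 virtual resolution scaled 5x; SamplerState.PointClamp supplies the nearest-neighbor sampling, and MonoGame's optional-parameter overload of SpriteBatch.Begin is assumed (XNA 4 would need the full signature):

RenderTarget2D lowRes;

// in LoadContent:
lowRes = new RenderTarget2D(GraphicsDevice, 320, 200);

// in Draw:
// 1) render the whole game, logic coordinates included, at 320x200
GraphicsDevice.SetRenderTarget(lowRes);
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin(samplerState: SamplerState.PointClamp);
// ... draw the game world in low-res coordinates ...
spriteBatch.End();

// 2) blow the result up to the screen at an integer scale factor
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin(samplerState: SamplerState.PointClamp);
spriteBatch.Draw(lowRes, new Rectangle(0, 0, 320 * 5, 200 * 5), Color.White);
spriteBatch.End();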
This is assuming you want a full retro look and feel, where you can't draw things at what would normally be sub-pixel positions (1-pixel increments after scaling 5x instead of 5-pixel increments, or even sub-pixel at full resolution). If you want something that feels more modern, where the scaled-up art can transform (move, rotate, scale) and animate in single-pixel increments or even sub-pixel with floating point, then you do need to scale up every individual sprite and tile. I don't think that would feel very retro and low-res, though; more like something very modern using scaled-up blocky pixel art. Operating at the original low resolution tends to affect not just the look of the game but the feel as well.

Draw a one-pixel line around a square sprite

I have a 15 x 15 pixel box, of which I draw several in different colours using:
spriteBatch.Draw(texture, position, colour);
What I'd like to do is draw a one-pixel line around the outside, in different colours, thus making it a 17 x 17 box with (for example) a blue outline one pixel wide and a grey middle.
The only way I can think of doing it is to draw two boxes, one 17x17 in the outline colour and one 15x15 in the box colour, and layer them to give the appearance of an outline:
spriteBatch.Draw(texture17by17, position, outlineColour);
spriteBatch.Draw(texture15by15, position, boxColour);
Obviously the position vector would need to be modified but I think that gives a clear picture of the idea.
The question is: is there a better way?
You can draw lines and triangles using DrawUserIndexedPrimitives; see Drawing 3D Primitives using Lists or Strips on MSDN for more details. Other figures like rectangles and circles are constructed from lines, but you'll need to implement them yourself.
To render lines in 2D, just use an orthographic projection that mirrors SpriteBatch's transformation matrix.
You can find a more complete example in the Primitives sample from Xbox Live Indie Games, whose PrimitiveBatch class encapsulates the drawing logic.
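As a rough illustration of the approach (not the PrimitiveBatch class itself), a line list plus a BasicEffect with a pixel-space orthographic projection can draw the 17x17 outline directly; x, y and outlineColour here are hypothetical values from the question:

var effect = new BasicEffect(GraphicsDevice)
{
    VertexColorEnabled = true,
    // orthographic projection in pixel coordinates, matching SpriteBatch
    Projection = Matrix.CreateOrthographicOffCenter(
        0, GraphicsDevice.Viewport.Width, GraphicsDevice.Viewport.Height, 0, 0, 1),
};

Vector3 tl = new Vector3(x, y, 0), tr = new Vector3(x + 17, y, 0);
Vector3 br = new Vector3(x + 17, y + 17, 0), bl = new Vector3(x, y + 17, 0);
var verts = new[]
{
    // four edges of the box as four separate lines
    new VertexPositionColor(tl, outlineColour), new VertexPositionColor(tr, outlineColour),
    new VertexPositionColor(tr, outlineColour), new VertexPositionColor(br, outlineColour),
    new VertexPositionColor(br, outlineColour), new VertexPositionColor(bl, outlineColour),
    new VertexPositionColor(bl, outlineColour), new VertexPositionColor(tl, outlineColour),
};

foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Apply();
    GraphicsDevice.DrawUserPrimitives(PrimitiveType.LineList, verts, 0, 4);
}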
Considering XNA can't draw "lines" the way OpenGL immediate mode can, it is far more efficient to draw a sprite with a pre-generated textured quad (2 triangles) than to draw additional dynamically textured geometry where each "line" needs at least 1 triangle of its own: 2 triangles versus 4 for a four-sided outline, with fewer vertices in the former too.
So I would not try to mimic lines around the outside of the box with additional "thin" geometry; instead, continue with what you are doing and draw 2 different sprites (each is a quad anyway).
Every object drawn in 3D is drawn using triangles. - Would you like to know more?
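If maintaining two separate textures bothers you, a variation on the same two-sprite idea is a single 1x1 white texture stretched to both rectangles; this sketch assumes the x, y, outlineColour and boxColour values from the question:

Texture2D pixel = new Texture2D(GraphicsDevice, 1, 1);
pixel.SetData(new[] { Color.White });

// 17x17 outline behind, 15x15 fill inset by one pixel on each side
spriteBatch.Draw(pixel, new Rectangle(x, y, 17, 17), outlineColour);
spriteBatch.Draw(pixel, new Rectangle(x + 1, y + 1, 15, 15), boxColour);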

Lens flare in 2D XNA 4.0 scene

I am creating a realtime scene in XNA. It is 2D, using sprites only (rendered on quads, standard SpriteBatch with an alpha map on the sprites). I would like to create a simple lens flare, actually only occlusion around the light source (I don't need the direction to the camera center to offset multiple lens-flare sprites, etc.). The only thing I basically need is to calculate how many pixels of the light source sprite (a small star) were rendered, and to set the scale of the lens-flare sprite accordingly (scale 0 if no pixels of the relevant sprite are visible).
I know how to do it in 3D; I read through this and tested a few things:
http://my.safaribooksonline.com/book/programming/game-programming/9781849691987/1dot-applying-special-effects/id286698039
I would like to ask what the best and cheapest way is to do this in a 2D scene (counting how many pixels of a sprite were rendered or occluded, with per-pixel precision or something comparable).
I also know the stencil buffer could help, but I am not sure how to apply it in this case.
Okay, two ways to solve it. One is the somewhat old-school approach: use the stencil buffer to count the occluded pixels and scale the lens-flare sprites according to it.
The other is the modern approach: use screen-space lens flares. Isolate the bright pixels (I recommend an HDR rendering pipeline, generating lens flares from brightness values above 1.0, though it depends on the scene's average and maximum) and generate ghosts, like so:
https://www.youtube.com/watch?v=_A0nKfzbs80&list=UUywPlxpmCZtuqOs6_bZEG9A
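For the old-school route, note that XNA also exposes hardware occlusion queries, which hand you a pixel count directly instead of going through the stencil buffer. A hypothetical sketch follows; the occluders must actually write depth or stencil for the query to reject pixels, and totalFlarePixels / flareScale are your own values:

OcclusionQuery query = new OcclusionQuery(GraphicsDevice);

// while drawing the frame:
query.Begin();
// ... draw the light-source sprite (the small star) ...
query.End();

// a frame or two later, once the GPU has the result:
if (query.IsComplete)
{
    float visibleFraction = (float)query.PixelCount / totalFlarePixels;
    flareScale = visibleFraction; // 0 when the star is fully occluded
}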

Making a light system like Terraria?

I am trying to make my lighting similar to Terraria's block lighting. Now, I know how to make blocks darker: I can assign blocks a certain light level. But how would I make an actual light entity that emits light in a round shape (it can be diamond-shaped too)?
Help would be greatly appreciated; also, if I wasn't clear in my question, feel free to ask.
Basic 2D lighting is very simple. Just do a distance check from your block to your light, and use that value to scale your light.
This is something you can do fairly simply, since SpriteBatch.Draw has a nice Color tint parameter.
A rough version in C# could be:
float distance = (block.Position - light.Position).Length();
float lightPower = MathHelper.Clamp(1f - distance / light.MaxDistance, 0f, 1f); // brightest at the light, fading to 0 at MaxDistance
Color finalTint = light.Color * lightPower;
// render the block with finalTint
For nicer-looking light, you could replace the linear falloff with a smoother curve.
If you also want lights to shine through a few blocks like in Terraria, you could count all the blocks between your block and the light source and scale your lightPower down by that amount; that gives you the same effect Terraria has (a rough sketch follows below).
Of course, this is not an optimized way of doing it, but it should work.
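A rough sketch of that block-counting idea, sampling the line between the block and the light at block-sized steps; IsSolid and blockSize are hypothetical stand-ins for your tile map, and lightPower comes from the snippet above:

int blockers = 0;
int steps = (int)(Vector2.Distance(block.Position, light.Position) / blockSize);
for (int i = 1; i < steps; i++)
{
    Vector2 p = Vector2.Lerp(block.Position, light.Position, i / (float)steps);
    if (IsSolid((int)(p.X / blockSize), (int)(p.Y / blockSize)))
        blockers++;
}
// hypothetical falloff: each blocking tile removes a quarter of the light
lightPower = MathHelper.Clamp(lightPower - blockers * 0.25f, 0f, 1f);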
The latest Terraria versions, however, seem to have smooth per-pixel lighting instead of per-block. For that I assume they use a second render target and/or a pixel shader to keep performance fast. This could be a little difficult if you are not familiar with rendering pipelines, though.
Hope this helps!
I'm working on a game with a similar lighting model, and the way we do it is this:
Draw the scene, without lighting, to a render target (called the 'Scene Buffer')
Draw the scene's lights, represented as grayscale gradients of any required shape, to a second render target (called the 'Light Map')
Draw the Scene Buffer to the screen, passing in the Light Map as a parameter to the pixel shader
In the pixel shader, query the value of the Light Map at each pixel and adjust the color of the final pixel up or down as necessary.
This also gives you the ability to have colored lighting; all you have to do is tint the light gradients that you render to the Light Map. Just remember to use additive alpha blending.
The downside of this approach is that it's rather naive, and provides no easy way to occlude the lights (that is to say, they pass through walls). In our case, this isn't an issue; you might decide otherwise.
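A minimal sketch of that pipeline. Where the steps above describe a pixel shader, this version composites with a multiplicative BlendState instead, which gives the same darken-by-light-map result for the simple case; sceneBuffer and lightMap are hypothetical RenderTarget2Ds at screen size:

BlendState multiply = new BlendState
{
    ColorSourceBlend = Blend.DestinationColor,
    ColorDestinationBlend = Blend.Zero,
    AlphaSourceBlend = Blend.DestinationAlpha,
    AlphaDestinationBlend = Blend.Zero,
};

// 1) Scene Buffer: the unlit scene
GraphicsDevice.SetRenderTarget(sceneBuffer);
GraphicsDevice.Clear(Color.Black);
spriteBatch.Begin();
// ... draw the world ...
spriteBatch.End();

// 2) Light Map: additive, so overlapping lights sum
GraphicsDevice.SetRenderTarget(lightMap);
GraphicsDevice.Clear(new Color(30, 30, 30)); // ambient light level
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
// ... draw one tinted grayscale gradient sprite per light ...
spriteBatch.End();

// 3) Composite: scene multiplied by light map
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin();
spriteBatch.Draw(sceneBuffer, Vector2.Zero, Color.White);
spriteBatch.End();
spriteBatch.Begin(SpriteSortMode.Deferred, multiply);
spriteBatch.Draw(lightMap, Vector2.Zero, Color.White);
spriteBatch.End();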

Issue with alpha in a 2D lighting shader in XNA 4.0

I'm currently learning HLSL with XNA; I figured the best place to start after tutorials would be some simple 2D shaders. I'm attempting to implement a simple lighting shader in 2D.
I draw the scene without shadows to a render target, swap my render target to a shadow map, draw each light individually onto the shadow map via the alpha channel, then swap back to the default render target and render the scene with the shadows on top.
The alpha of the light changes depending on the distance between the current pixel and the light's position. This is all working fine for me, except that when two lights overlap, rendering the scene produces a nasty blending issue.
I'm using AlphaBlend both when I draw onto the shadow map and when I draw the shadow map to the scene.
Am I just using the wrong blending settings here? I don't know much about blend states.
Sorry if the question was vague.
I've had this happen before when doing software lighting, where your values exceed 255 (or 1.0) and you end up back in the realms of blackness. I believe the values wrap around via a modulo operation rather than being clamped, causing 1.1 to become 0.1 (or 256 to become 0). I notice that the black ellipse is actually the combined edges of the two circles, which is what leads me to this conclusion.
Hope this gets you closer to understanding and finding your problem. I have no idea what code you are already using, but in your HLSL technique pass you could try adding:
AlphaBlendEnable = true;
SrcBlend = One;
DestBlend = One;
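If you set the blend state from the C# side rather than in the pass, the equivalent of source and destination both One is the built-in additive state:

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
// ... draw the lights onto the shadow map ...
spriteBatch.End();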
