I am creating a realtime 2D scene in XNA using sprites only (rendered on quads, standard SpriteBatch with alpha-mapped sprites). I would like to create a simple lens flare, actually only occlusion around the light source (I don't need the direction to the camera center to offset multiple sprites for the lens flare, etc.). The only thing I basically need is to calculate how many pixels of the light-source sprite (a small star) are rendered, and set the scale of the lens flare sprite accordingly (so scale 0 if no pixels of that sprite are visible).
I know how to do it in 3D; I read through this and tested a few things:
http://my.safaribooksonline.com/book/programming/game-programming/9781849691987/1dot-applying-special-effects/id286698039
I would like to ask what is the best and cheapest way to do it in a 2D scene (counting how many pixels of a sprite were rendered/occluded with per-pixel precision, or something comparable).
I also know the stencil buffer could help, but I am not sure how to apply it in this case.
Okay, there are two ways to solve it. One is the kinda old-school approach: use the stencil buffer (or a hardware occlusion query) to count how many of the light source's pixels are visible and scale the lens flare sprites accordingly; a rough sketch follows below.
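If you go that route, XNA 4.0 also exposes hardware occlusion queries, which give you the visible pixel count directly instead of the stencil buffer. A rough sketch, assuming your occluders were drawn first with depth writes enabled (e.g. SpriteBatch with DepthStencilState.Default and meaningful layer depths) so the depth test has something to reject against; the field names, star size and blend state are illustrative, not from your code:

    // Rough sketch, XNA 4.0. Counts how many pixels of the star sprite pass the
    // depth test and converts that into a 0..1 scale for the flare sprite.
    OcclusionQuery lightQuery;
    bool queryPending;
    float flareScale;
    const float FullyVisiblePixels = 16f * 16f;   // pixel area of the unoccluded star sprite (illustrative)

    static readonly BlendState NoColorWrites = new BlendState
    {
        ColorWriteChannels = ColorWriteChannels.None   // the probe stays invisible to the player
    };

    void DrawLightProbe(SpriteBatch spriteBatch, Texture2D star, Vector2 lightPos)
    {
        if (lightQuery == null)
            lightQuery = new OcclusionQuery(GraphicsDevice);

        // Queries are asynchronous: read back the previous result before issuing a new one.
        if (queryPending && lightQuery.IsComplete)
        {
            flareScale = MathHelper.Clamp(lightQuery.PixelCount / FullyVisiblePixels, 0f, 1f);
            queryPending = false;
        }

        if (!queryPending)
        {
            lightQuery.Begin();
            spriteBatch.Begin(SpriteSortMode.Immediate, NoColorWrites, null, DepthStencilState.DepthRead, null);
            spriteBatch.Draw(star, lightPos, Color.White);
            spriteBatch.End();
            lightQuery.End();
            queryPending = true;
        }

        // Elsewhere: draw the flare sprite with scale = flareScale (0 means fully occluded).
    }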
The other way is the modern approach: screen-space lens flares. Isolate the bright pixels (I recommend an HDR rendering pipeline and using brightness values above 1.0 to generate the lens flares, but it depends on the scene average and maximum) and generate ghosts, like so:
https://www.youtube.com/watch?v=_A0nKfzbs80&list=UUywPlxpmCZtuqOs6_bZEG9A
Hi guys! We are developing a 2D game for desktop and mobile. It is grid-based, although we decided not to use tilemaps and instead create our own grid in code, because we need to program different actions and interactions for each tile. The game UI is on a canvas and that part resizes as we expected. Also, everything already works quite well functionality-wise on both mobile and desktop. The problem is that, since we made the main grid and objects as sprites, it works great on any 16:9 screen, but some of the screen space gets cut off when it runs on any wider screen.
How can we resize the whole sprite scene depending on screen size? I guess it has more to do with the camera than with the actual objects, but we don't have a clear clue. We already looked into the "pixel-perfect camera" and, although we haven't dug too deeply into it, it looks like it's aimed more at preserving artwork at full resolution than at what we need.
This one is from the PC where we are developing the game and where it looks as it should:
And this one is from a PC with a wider screen (16:10), where the scene gets cropped at the sides (in the previous picture I marked in orange/yellow the columns that are lost in this one).
I guess there should be a way to stretch all sprites to fit the screen, but I think the best way to go would be to get empty horizontal or vertical bars, on top and bottom or at the sides, in order to preserve the exact proportions; that would be good enough. But how to do it?
Thanks in advance.
Solved! Starting from the great ideas ephb gave me, I finally decided to get a reference percentage between the screen width and height. So, for 1920x1080, which is the resolution I know the game looks correct in, I did a rule of three and got that 1080 is 56.25% of the screen width, and that in that case the camera size should be 5.
Knowing those two values, I can now check the height proportion of the user's device and, using another rule of three, calculate the correct camera size, like this:
    Camera cam;

    void Awake()
    {
        cam = Camera.main;
        cam.orthographicSize = GetHeightProportion() * 5 / 56.25f; // 1920x1080 reference
    }

    float GetHeightProportion()
    {
        return ((float)Screen.height * 100) / (float)Screen.width;
    }
I think you are looking to resize for any aspect ratio, not screen resolution. You can use the aspect ratio dropdown of the Game window to see how it looks without running it on different computers.
The default setting is that the vertical field of view of your camera stays the same. Since the 16:10 display is taller, this results in your image appearing zoomed in, with the sides being cut off.
Since you are using sprites, you basically have to recreate what the canvas scaler does, but in world space.
You could move your camera, change its FOV if it is a perspective camera, change its size if it is an orthographic camera, or scale your scene. I will describe the first steps:
Get the screen width and height and calculate the aspect ratio.
Compare it against your target aspect ratio (I assume it is 16:9) to get a multiplier.
Use this multiplier to scale your scene, move your camera, or change its field of view / orthographic size, as sketched below.
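For the orthographic case, a minimal sketch, assuming a 16:9 reference aspect and the reference camera size of 5 used in the accepted solution above; the class and field names are just illustrative:

    using UnityEngine;

    // Illustrative sketch: attach to the orthographic camera. Keeps the horizontal
    // extent of the scene constant by enlarging the camera size on screens that are
    // taller (narrower) than the 16:9 reference.
    public class FitWidthCamera : MonoBehaviour
    {
        const float targetAspect = 16f / 9f;  // aspect the scene was designed for
        const float referenceSize = 5f;       // orthographic size at the target aspect

        void Awake()
        {
            Camera cam = GetComponent<Camera>();
            float currentAspect = (float)Screen.width / Screen.height;
            float multiplier = targetAspect / currentAspect;  // > 1 on taller screens
            // Never drop below the reference size, so wider screens simply show
            // extra space at the sides instead of cropping the top and bottom.
            cam.orthographicSize = referenceSize * Mathf.Max(1f, multiplier);
        }
    }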
So as you can see in the picture, I made a texture repeat on a rectangular box (its size is 40, 10, 60), but it repeats the same number of times on every face, so depending on the size of the face the texture is stretched.
In the picture you can see that on the top face the texture repeats correctly and keeps its original size, but on the other faces it is stretched.
Is there a way to repeat the texture without changing its size?
Thank you for your responses.
screen of the problem
Edit: this C# script does exactly what I want, but is there a way to do it without a script, since it was written in 2017?
https://github.com/Dsphar/Cube_Texture_Auto_Repeat_Unity/blob/master/ReCalcCubeTexture.cs
Unfortunately you will probably need to create a new material with the same image for each different scale using the 'Tiling' attribute:
*** Edit #1
The x and y Tiling values need to be proportional to the scale of the plane or the texture will stretch; a small scripted sketch of the same idea follows below.
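If you do end up driving it from a script after all, here is an illustrative sketch of that proportionality. It assumes the material's main texture tiling is what you adjust, and it only corrects faces whose UVs run along local X/Z (e.g. the top of a default cube); the linked ReCalcCubeTexture script edits the UVs per face, which is what you need for all six sides:

    using UnityEngine;

    // Illustrative sketch: keep the texture's world-space size constant by making
    // the material's tiling proportional to the object's scale.
    public class MatchTilingToScale : MonoBehaviour
    {
        public float textureWorldSize = 1f;  // world units covered by one repeat of the texture

        void Start()
        {
            Renderer rend = GetComponent<Renderer>();
            Vector3 s = transform.localScale;
            // Note: accessing rend.material instantiates a per-object copy of the material.
            rend.material.mainTextureScale = new Vector2(s.x / textureWorldSize, s.z / textureWorldSize);
        }
    }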
If the size of the mesh being textured is static, you can change its UVs in a 3D program. You could even change the UVs via script.
Another option would be to look into world-space (triplanar) shaders. These texture based on world position rather than the vertices' local positions.
If you are using Shader Graph, look at the Triplanar node:
https://docs.unity3d.com/Packages/com.unity.shadergraph#6.9/manual/Triplanar-Node.html
I'm making a 2D platform game engine with C# and MonoGame that uses floating point numbers for all of its maths. It works well so far, however I'd like to have the option of using the engine for a retro-style pixel art game, which is obviously best done with integer maths. Is there a simple way to achieve this?
The simplest method I can come up with is to do all the calculations with floating point numbers, but then when I draw each sprite, I round the position to the nearest multiple of the scale of the pixel art (for example, to the nearest 5 pixels for pixel art that is scaled 5x). This functions, but the movement of the player character on the screen doesn't feel smooth.
I've tried rounding the player position itself each time I update it, but this breaks my collision detection, causing the player to be stuck on the floor.
Is there a standard way people solve this?
Thanks :)
Apologies for resurrecting an ancient question, but I think a very simple way to do this for both the programmer and the hardware, using GPU triangle rasterization (I'm assuming you're using a GPU pipeline, as this is trivial otherwise), and one that most directly simulates the look and feel of old hardware rendering at 200p or so, is to render your entire game to an offscreen texture at that actual 200p resolution and perform your entire game logic, including collision detection, at that resolution. You can shift all the coordinates by half a pixel which, combined with nearest-neighbor sampling, should get them to plot precisely at the desired pixel if you have to work in floating point.
Then just draw a rectangle with the offscreen texture to the screen, scaled to the full display resolution using nearest-neighbor sampling and integer scale factors (2x, 3x, 5x, etc.); a minimal sketch is below. That should be much simpler than scaling each individual sprite and tile.
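A minimal MonoGame sketch of that setup, assuming a 320x200 internal resolution and a 5x integer scale; the names and sizes are illustrative, not from your engine:

    SpriteBatch spriteBatch;
    RenderTarget2D lowResTarget;
    const int VirtualWidth = 320, VirtualHeight = 200, Scale = 5;  // illustrative values

    protected override void LoadContent()
    {
        spriteBatch = new SpriteBatch(GraphicsDevice);
        lowResTarget = new RenderTarget2D(GraphicsDevice, VirtualWidth, VirtualHeight);
    }

    protected override void Draw(GameTime gameTime)
    {
        // 1. Render the whole game at the low internal resolution.
        GraphicsDevice.SetRenderTarget(lowResTarget);
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin(samplerState: SamplerState.PointClamp);
        // ... draw sprites and tiles in 320x200 coordinates here ...
        spriteBatch.End();

        // 2. Blow the result up to the backbuffer with nearest-neighbor sampling.
        GraphicsDevice.SetRenderTarget(null);
        GraphicsDevice.Clear(Color.Black);
        spriteBatch.Begin(samplerState: SamplerState.PointClamp);
        spriteBatch.Draw(lowResTarget,
            new Rectangle(0, 0, VirtualWidth * Scale, VirtualHeight * Scale), Color.White);
        spriteBatch.End();

        base.Draw(gameTime);
    }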
This is assuming you want a full retro look and feel where you can't draw things at what would normally be sub-pixel positions (1 pixel increments after scaling 5x instead of 5 pixel increments, or even sub-pixel at full resolution). If you want something that feels more modern where the scaled up art would be able to transform (move, rotate, scale) and animate in single-pixel increments or even sub-pixel with floating point, then you do need to scale up every individual sprite and tile. I don't think that would feel so retro and low-res though, more like very modern with scaled up blocky pixel art. Operating at the original low resolution tends to impact not just the look of the game but the feel as well.
I'm trying to create tiled terrain in 3D with XNA. I checked tutorials on how to do it (Riemer's and Allen's). Allen's tutorial has exactly the result I want to achieve; however, I'm not sure about the performance: it seems he is using a single quadrilateral to draw the whole terrain and process it with a pixel shader, which means the whole terrain is processed every frame.
Currently I'm drawing a quadrilateral for each tile (example); this allows drawing only the visible tiles, but it also means that many more vertices need to be processed each frame and a lot of DrawIndexedPrimitives calls are made.
Am I doing it right, or is Allen's way faster? Is there a better way to do tiled terrain?
Thanks.
It totally depends on your terrain complexity and size. Typically, you will have terrain tiles with more than one quad per tile (for instance, a tile could consist of 4096 triangles) and then displace the vertices to get the terrain you want. Still, each tile will be an indexed primitive, but a single draw call will produce lots of triangles and cover a larger part of the terrain. Taking this idea further, you can make the tiles in the distance larger so you don't get too much detail (look for quad-tree/clipmap-based terrain approaches; you'll get something like this: http://twitpic.com/89y5kn).
Alternatively, if you can displace in the vertex shader, you can use instancing to further reduce the number of draw calls. Per instance, you pass the UV coordinates into your heightfield and the world-space position, and then you again render high-resolution tiles, but now you may wind up with a single draw call for the whole terrain.
For a small game, you might want to generate only a few high-resolution tiles (65k triangles or so) and then frustum-cull them, as sketched below. That gives you a large terrain easily and is still manageable, but this definitely doesn't scale too well :) Depends on your needs.
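A rough sketch of that per-tile culling in XNA; TerrainTile and its fields, and the effect variable, are illustrative names, not from your code:

    // Cull terrain tiles against the camera frustum and issue one
    // DrawIndexedPrimitives call per visible tile.
    BoundingFrustum frustum = new BoundingFrustum(view * projection);

    foreach (TerrainTile tile in tiles)
    {
        if (!frustum.Intersects(tile.Bounds))   // tile.Bounds is a precomputed BoundingBox
            continue;

        GraphicsDevice.SetVertexBuffer(tile.VertexBuffer);
        GraphicsDevice.Indices = tile.IndexBuffer;

        foreach (EffectPass pass in effect.CurrentTechnique.Passes)
        {
            pass.Apply();
            GraphicsDevice.DrawIndexedPrimitives(
                PrimitiveType.TriangleList, 0, 0,
                tile.VertexCount, 0, tile.TriangleCount);
        }
    }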
For the texture tiles, you can also use a low-resolution index texture and do the lookup into an atlas per pixel, or just store the indices in the vertex buffer and interpolate them (this is very common: store 4 weights per vertex and use them to look up into four different textures).
I'm currently learning HLSL with XNA; I figured the best place to start after tutorials would be some simple 2D shaders. I'm attempting to implement a simple lighting shader in 2D.
I draw the scene without shadows to a render target, swap my render target to a shadow map, draw my lights (each individually) onto the shadow map via the alpha channel, swap my render target back to the default, render the scene, and then render the shadows on top.
The alpha of each light changes depending on the distance between the current pixel and the light's position. This all works fine for me, except that when I render the scene, two overlapping lights cause a nasty blending issue.
I'm using AlphaBlend both when I draw onto the shadow map and when I draw the shadow map to the scene.
Am I just using the wrong blending settings here? I don't know much about blendstates.
Sorry if the question was vague.
I've had this happen before when doing software lighting, where your values exceed 255 (or 1.0) and you end up back in the realm of blackness. I believe the values are wrapped as if by a modulo operation, causing 1.1 to become 0.1, or 256 to become 1. I notice the black ellipse is actually the combined edges of the two lights, which is what leads me to this conclusion.
Hope this gets you closer to understanding and fixing your problem. I have no idea what code you are already using, but in your HLSL technique pass you could try adding:
AlphaBlendEnable = true;
SrcBlend = One;
DestBlend = One;
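If you would rather set it from the C# side (assuming XNA 4.0), the equivalent is additive blending on the SpriteBatch pass that draws the lights:

    // Additive blending weights source and destination by One, so overlapping
    // lights add up instead of alpha-blending over each other.
    spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.Additive);
    // ... draw each light sprite onto the shadow map ...
    spriteBatch.End();

Note that a standard 8-bit render target will still clamp at 1.0, so very bright overlaps flatten to white rather than wrapping back to black.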