I'm trying to write a simple raytracer as a hobby project and it's all working fine now, except I can't get soft shadows to work at all. My idea of soft shadows is that the light source has a location and a radius. To do a shadow test on this light I take the point where the primary ray hit an object in the scene and cast n rays towards the light source, where each new ray has a random component on every axis, with the random component varying between -radius and radius.
If such a ray hits an object in the scene, I increment a hit counter (if a ray hits multiple objects, it still only increments by one). If it makes it to the light source without collisions, I add the distance from the primary ray's intersection point to the light source's center to a variable.
When n samples have been taken, I calculate the ratio of rays that have collided and multiply the color of the light by this ratio (so a light with color 1000,1000,1000 becomes 500,500,500 with a ratio of 0.5, where half the rays have collided). Then I calculate the average distance to the light source by dividing the distance variable from earlier by the number of non-colliding rays. I return that value and the function exits.
The problem is: it doesn't work. Not quite, at least. What it looks like can be seen here. You can see it sort of resembles soft shadows, if you squint real hard.
I don't get it; am I making some fundamental flaw here, or is it something tiny? I'm fairly sure the problem is in this method, because when I count the number of partially lit pixels produced directly by this method, there are only about 250, when there should be a lot more. And when you look closely at the picture, you can see there are some partially lit pixels, suggesting the rest of the code processes partially lit pixels just fine.
Here's the actual soft-shadow light class:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace MyFirstRayTracer
{
    public class AreaLight : ILight
    {
        private const int _radius = 5;
        private const int _samples = 16;

        public Color Color { get; set; }
        public Vector Location { get; set; }

        #region ILight Members

        public float GetLightingInformation(Vector point, ISceneObject[] scene, out Color color)
        {
            int intersectCount = 0;
            float distance = -1;

            for (int i = 0; i < _samples; i++)
            {
                bool intersects = false;
                float rand = 0;
                rand = _radius - (float)(new Random().NextDouble() * (2 * _radius));

                foreach (ISceneObject obj in scene)
                {
                    Vector iPoint;
                    Vector loc = new Vector(Location.X + rand, Location.Y + rand, Location.Z + rand);

                    if (!obj.Intersect(new Ray(point, loc), out iPoint))
                    {
                        distance += (Location - point).SqLength;
                    }
                    else
                    {
                        intersects = true;
                        distance -= (Location - point).SqLength;
                    }
                }

                if (intersects)
                    intersectCount++;
            }

            float factor = 1 - ((float)intersectCount / _samples);
            color = new Color(factor * Color.R, factor * Color.G, factor * Color.B);
            return (float)Math.Sqrt(distance / (_samples - intersectCount));
        }

        #endregion
    }
}
Minor point, but is this the best use of the Random class?

for(int i = 0; i < _samples; i++)
{
    bool intersects = false;
    float rand = 0;
    rand = _radius - (float)(new Random().NextDouble()*(2*_radius));

Should this not be:

var rnd = new Random();

for(int i = 0; i < _samples; i++)
{
    bool intersects = false;
    float rand = 0;
    rand = _radius - (float)(rnd.NextDouble()*(2*_radius));
Try generating a different "rand" for each component of "loc". As is, your jittered points all lie on a line.
You actually generate the points on a line with direction (1, 1, 1). Is the light source really linear?
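For example, a minimal sketch of per-axis jitter (assuming the same Vector, Location and _radius members as in the class above, and a single Random instance created outside the sampling loop):

// Hypothetical sketch: reuse one Random instance and draw an independent
// offset in [-_radius, _radius] for each axis, so the sample points cover
// the whole light rather than a line.
Random rng = new Random();

for (int i = 0; i < _samples; i++)
{
    float dx = _radius - (float)(rng.NextDouble() * (2 * _radius));
    float dy = _radius - (float)(rng.NextDouble() * (2 * _radius));
    float dz = _radius - (float)(rng.NextDouble() * (2 * _radius));

    Vector loc = new Vector(Location.X + dx, Location.Y + dy, Location.Z + dz);
    // ... cast the shadow ray from point towards loc as before ...
}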
Also, I can barely see anything in your example. Could you move your camera nearer to the to-be shadow, and not pointing from the direction of the light?
See, this is why I come to this site :)
Every axis has its own random offset now, and it looks a lot better. It's still a little weird looking, though increasing the number of samples helps. It now looks like this.
Do you know a more efficient way to reduce the pattern-forming?
The biggest help though: not instantiating Random for every sample. It seriously tripled my rendering speed with soft shadows! I never knew that Random was so costly to instantiate. Wow.
Thanks a lot.
In your response you asked for an improved way to make soft shadows. An improvement could be, instead of randomizing all the rays from the same point, to give each ray a different offset on every axis, effectively giving each one a separate little window to randomize in. This should result in a more even distribution. I don't know if that was clear, but another way to describe it is as a grid which is perpendicular to the shadow ray. Each tile in the grid contains one of the n shadow rays, but the location within that tile is random. Here you can find a part of a tutorial which describes how this can be used for soft shadows.
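Roughly, a sketch of that stratified (jittered grid) idea. The helper and its parameters are hypothetical, and it assumes the Vector type supports addition and scaling by a float, with u and v being two unit vectors spanning the light's area perpendicular to the shadow direction:

// Hypothetical sketch: one sample point per grid tile instead of fully
// random points, which spreads the samples evenly and reduces patterns.
static Vector[] GetJitteredSamplePoints(Vector lightCenter, Vector u, Vector v,
                                        float radius, int gridSize, Random rng)
{
    var points = new Vector[gridSize * gridSize];
    float cell = 2 * radius / gridSize;   // width of one grid tile

    for (int row = 0; row < gridSize; row++)
    {
        for (int col = 0; col < gridSize; col++)
        {
            // A random position inside this tile only.
            float offU = -radius + (col + (float)rng.NextDouble()) * cell;
            float offV = -radius + (row + (float)rng.NextDouble()) * cell;
            points[row * gridSize + col] = lightCenter + u * offU + v * offV;
        }
    }
    return points;
}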
I'm trying to make an "Explosion effect on a grid", and I need an algorithm that will allow me to do the following:
Note: The grid represents a List<List>, and I'm trying to filter out the red dots
So we start off with a given grid with black dots (in our case, black dots represent solid points on our grid, and red dots represent the points we remove from the list).
Eventually, our rectangle is transformed into this random shape with holes on the edges (blue area).
My Attempt:
The Problem:
Sometimes the radius of my star/circle shape is pretty big and the output doesn't give me that "explosion" effect I'm looking for (basically, an unpredictable output), plus it really limits the shape.
Do you have any ideas or know of some mathematical algorithms that can help? Thanks for reading! :)
Sorry if I wasn't clear enough, but this is basically what I'm looking for:
This is a straightforward approach to get a template:
using System;

namespace ConsoleApp2
{
    class Program
    {
        static void Main(string[] args)
        {
            int size = 10;
            var grid = GetExplosionTemplate(size, new Random(), 1);

            for (int i = 0; i < size; i++)
            {
                for (int j = 0; j < size; j++)
                {
                    Console.Write(grid[i + j * size] ? "X" : "O");
                }
                Console.WriteLine();
            }
        }

        private static bool[] GetExplosionTemplate(int size, Random rnd = null, float explosionRoughness = 0)
        {
            var grid = new bool[size * size];
            var rr = (size + 1) / 2f;

            for (int i = 0; i < size; i++)
            {
                for (int j = 0; j < size; j++)
                {
                    var cellId = i + j * size;
                    // Parentheses around the null-coalescing part so a null rnd adds zero
                    // roughness instead of nulling out the whole distance expression.
                    var r = Math.Sqrt(Math.Pow(i - size / 2, 2) + Math.Pow(j - size / 2, 2))
                            + (rnd?.NextDouble() * explosionRoughness ?? 0);
                    grid[cellId] = r > rr;
                }
            }
            return grid;
        }
    }
}
Roughness above 1 will give a scattered interior to the explosion; below 1 will give a more rounded explosion. Values around 0.5 to 1.5 look great for both even and non-even sizes. You can play around to make it better looking. Honestly, for explosion physics I would rather choose this roughness based on the material in each cell (sand is penetrated more easily than stone, for example), because you don't want fully fledged physics where you calculate explosion power traveling across cells weighted by their resistance (for example, an explosion in a cave will travel along empty paths and only slightly damage the environment, rather than carve out a vacuum within a radius).
Another simple approach to actually simulate explosion physics is to use recursion. You start at the center with an explosion power equal to some value; then, as the wave travels outward, each cell consumes some part of that power (and may be destroyed in the process) and then emits the remaining power equally to its adjacent cells (even already visited ones, so you simulate the explosion wave). This way it will be more realistic in terms of materials. You can even simulate partially empty cells made of resistant materials (like an iron fence: it is resistant but transmits the blast well and destroys everything around it).
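A rough sketch of that recursive idea (all names are hypothetical; each cell holds a resistance value that consumes part of the wave's power, and whatever is left is split between the four neighbours):

// Hypothetical sketch: spread explosion power across a grid of resistances.
// Power that survives a cell's resistance is passed on to its neighbours.
static void Propagate(float[,] resistance, bool[,] destroyed, int x, int y, float power)
{
    if (x < 0 || y < 0 || x >= resistance.GetLength(0) || y >= resistance.GetLength(1))
        return;

    float remaining = power - resistance[x, y];
    if (remaining < 0.01f)
        return;                   // the cell absorbed (almost) the whole wave

    destroyed[x, y] = true;       // this cell is consumed by the explosion

    // Split the remaining power between the four adjacent cells,
    // revisiting cells so the wave can flow around obstacles.
    float share = remaining / 4f;
    Propagate(resistance, destroyed, x + 1, y, share);
    Propagate(resistance, destroyed, x - 1, y, share);
    Propagate(resistance, destroyed, x, y + 1, share);
    Propagate(resistance, destroyed, x, y - 1, share);
}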
I've been programming an ability for a hack-and-slash game which needs to check for units within a pie slice (or in between two angles, with a maximum length). But I'm stuck on how to check whether a unit is within the arc.
Scenario (not enough rep for an image, sorry, I'm new)
I currently use Physics2D.OverlapSphere() to get all of the objects within the maximum range. I then loop through all of the found objects to see whether they are within the two angles I specify. Yet this gives janky results, probably because angles don't like negative values and values above 360.
How could I make this work or is there a better way to do this?
I probably need to change the way I check whether the angle is within the bounds.
Thanks in advance guys! I might respond with some delay as I won't be at my laptop for a couple hours.
Here is the code snippet:
public static List<EntityBase> GetEntitiesInArc(Vector2 startPosition, float angle, float angularWidth, float radius)
{
    var colliders = Physics2D.OverlapCircleAll(startPosition, radius, 1 << LayerMask.NameToLayer("Entity"));
    var targetList = new List<EntityBase>();

    var left = angle - angularWidth / 2f;
    var right = angle + angularWidth / 2f;

    foreach (var possibleTarget in colliders)
    {
        if (possibleTarget.GetComponent<EntityBase>())
        {
            var possibleTargetAngle = Vector2.Angle(startPosition, possibleTarget.transform.position);
            if (possibleTargetAngle >= left && possibleTargetAngle <= right)
            {
                targetList.Add(possibleTarget.GetComponent<EntityBase>());
            }
        }
    }
    return targetList;
}
Vector2.Angle(startPosition, possibleTarget.transform.position);
This is wrong. Imagine a line from the scene origin (0,0) to startPosition and a line from the origin to transform.position. Vector2.Angle is giving you the angle between those two lines, which is not what you want to measure.
What you actually want is to give GetEntitiesInArc a forward vector, then get the vector from the start position to the target position (var directionToTarget = possibleTarget.transform.position - startPosition) and measure Vector2.Angle(forward, directionToTarget).
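Under that interpretation, a minimal sketch of the corrected method (the forward parameter is an assumption, since it is not in the original signature, and angularWidth is treated as the full width of the cone):

// Hypothetical sketch: compare each target's direction against the facing
// direction instead of measuring angles between positions.
public static List<EntityBase> GetEntitiesInArc(Vector2 startPosition, Vector2 forward, float angularWidth, float radius)
{
    var colliders = Physics2D.OverlapCircleAll(startPosition, radius, 1 << LayerMask.NameToLayer("Entity"));
    var targetList = new List<EntityBase>();

    foreach (var possibleTarget in colliders)
    {
        var entity = possibleTarget.GetComponent<EntityBase>();
        if (entity == null)
            continue;

        // Direction from the attacker to the target.
        Vector2 directionToTarget = (Vector2)possibleTarget.transform.position - startPosition;

        // Vector2.Angle returns the unsigned angle in degrees (0..180), so there are
        // no wrap-around issues with negative angles or values above 360.
        if (Vector2.Angle(forward, directionToTarget) <= angularWidth / 2f)
            targetList.Add(entity);
    }
    return targetList;
}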
I'm trying to create a simple mouse emulator controlled by a joystick's right thumbstick. I was trying to have the mouse move in the direction the stick pointed with a smooth gradient of pressure values dictating speed, but I've hit a number of snags when trying to do so.
The first is how to translate the angle into accurate X and Y values. I can't find a way to implement the angle correctly. The way I have it, the diagonals are likely to move considerably faster than the cardinals.
I was thinking I need something like Math.Cos(angle) for the X values, and Math.Sin(angle) for the Y values to increment the mouse, but I can't think of a way to set it up.
The second is smooth movement of the mouse, and this is probably the more important of the two. Since the SetPosition() function only works with integers, the rate at which pixels move over time seems very limited. The code I have is very basic, and only registers whole-number values of 1-10. That not only creates small 'jumps' in acceleration, but limits diagonal movement as well.
The goal would be to have something like 10 pixels per second, with the program running at 100 Hz and each cycle outputting a 0.1-pixel movement.
I'd imagine I might be able to keep track of the pixel 'decimals' for the X and Y values and add them to the axes when they build up to whole numbers, but I'd imagine there's a more efficient way to do so that still doesn't anger the SetPosition() function.
I feel like Vector2 objects should get this done, but I don't know how the angle would fit in.
Sample code:
//Poll Gamepad and Mouse. Update all variables.
public void updateData(){
    padOne = GamePad.GetState(PlayerIndex.One, GamePadDeadZone.None);
    mouse = Mouse.GetState();

    currentStickRX = padOne.ThumbSticks.Right.X;
    currentStickRY = padOne.ThumbSticks.Right.Y;
    currentMouseX = mouse.X;
    currentMouseY = mouse.Y;

    angle = Math.Atan2(currentStickRY, currentStickRX);

    vectorX = (int)( currentStickRX*10 );
    vectorY = (int)( -currentStickRY*10 );
    mouseMoveVector.X = vectorX;
    mouseMoveVector.Y = vectorY;

    magnitude = Math.Sqrt( Math.Pow( (currentStickRX - 0), 2 ) + Math.Pow( (currentStickRY - 0), 2 ) );
    if (magnitude > 1){
        magnitude = 1;
    }

    //Get values not in deadzone range and re-scale them from 0-1
    if(magnitude >= deadZone){
        activeRange = (magnitude - deadZone)/(1 - deadZone);
    }

    Console.WriteLine(); //Test Code
}

//Move mouse in a direction at a specific rate.
public void moveMouse(){
    if (magnitude > deadZone){
        Mouse.SetPosition( (currentMouseX + vectorX), (currentMouseY + vectorY));
    }

    previousStickRX = currentStickRX;
    previousStickRY = currentStickRY;
    previousActiveRange = activeRange;
}
Note: I'm using the XNA framework.
Anyway, apologies if I'm explaining these things incorrectly. I haven't been able to find a good resource for this, and the vector examples I searched for only move in integer increments and from point A to B.
Any help with any part of this is greatly appreciated.
I haven't tried it myself, but from my point of view you should normalize the pad axes after reading them; that way diagonals will move at the same speed as cardinals. For the second part, I would keep track of the mouse position in floating-point variables, such as a Vector2, and do the cast (or better, rounding) when setting the mouse position.
public void Start()
{
    mousePosV2 = Mouse.GetState().Position.ToVector2();
}

public void Update(float dt)
{
    Vector2 stickMovement = padOne.ThumbSticks.Right;
    stickMovement.Normalize();

    mousePosV2 += stickMovement*dt*desiredMouseSpeed;

    /// clamp here values of mousePosV2 according to Screen Size
    /// ...

    Point roundedPos = new Point((int)Math.Round(mousePosV2.X), (int)Math.Round(mousePosV2.Y));
    Mouse.SetPosition(roundedPos.X, roundedPos.Y);
}
I've been developing my own physics engine for a school project and recently I encountered a problem: per-pixel collisions on big sprites make my FPS drop A LOT.
Here's my pixel collision code. Before entering the following code I'm using Intersects() to see if their bounding boxes collided.
private bool PerPixelCollision(IObject a, IObject b)
{
    Color[] bitsA = new Color[a.Texture.Width * a.Texture.Height];
    a.Texture.GetData(bitsA);
    Color[] bitsB = new Color[b.Texture.Width * b.Texture.Height];
    b.Texture.GetData(bitsB);

    // Calculate the intersecting rectangle
    int x1 = Math.Max(a.Bounds.X, b.Bounds.X);
    int x2 = Math.Min(a.Bounds.X + a.Bounds.Width, b.Bounds.X + b.Bounds.Width);
    int y1 = Math.Max(a.Bounds.Y, b.Bounds.Y);
    int y2 = Math.Min(a.Bounds.Y + a.Bounds.Height, b.Bounds.Y + b.Bounds.Height);

    Color c;
    Color d;

    // For each single pixel in the intersecting rectangle
    for (int y = y1; y < y2; ++y)
    {
        for (int x = x1; x < x2; ++x)
        {
            // Get the color from each texture
            c = bitsA[(x - a.Bounds.X) + (y - a.Bounds.Y) * a.Texture.Width];
            d = bitsB[(x - b.Bounds.X) + (y - b.Bounds.Y) * b.Texture.Width];

            // If both colors are not transparent (the alpha channel is not 0), then there is a collision
            if (c.A != 0 && d.A != 0)
            {
                return true;
            }
        }
    }

    // If no collision occurred by now, we're clear.
    return false;
}
In the example I'm using to test, I'm dropping 4 sprites onto another sprite that represents the floor (736x79). When I change the floor sprite to a bigger one (3600x270) the FPS drops. I know the problem is in the pixel collision because I'm using a profiler.
Does anyone have any idea on how to optimize the collision?
P.S.: Sorry if I wasn't clear enough about my problem. My english is not so good.
EDIT: The first problem was solved by using the solution provided by #pinckerman, but I found another one related to pixel collision detection.
When a sprite collides with the transparent part of a texture, we get the part that intersected and check all the pixels of that part. The problem is: when the transparent part is big enough to cover the whole sprite, I check the whole texture of that sprite (currently using 50x50 textures, which is 2500 pixels). Is there any way to avoid checking the whole texture?
Thanks
I'm pretty sure your FPS drops so much because you're doing
Color[] bitsA = new Color[a.Texture.Width * a.Texture.Height];
a.Texture.GetData(bitsA);
Color[] bitsB = new Color[b.Texture.Width * b.Texture.Height];
b.Texture.GetData(bitsB);
at the beginning of your method.
I suppose you call PerPixelCollision every Update loop, and creating and copying millions of values every game cycle is not a very efficient thing to do.
A big sprite creates a huge Color[] array: the bigger your arrays, the slower this method will be.
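A minimal sketch of one way around that (hypothetical helper: the pixel data is read back once per texture and cached; it assumes XNA's Texture2D and Color types plus System.Collections.Generic):

// Hypothetical sketch: cache the pixel data per texture instead of calling
// GetData inside every PerPixelCollision call.
private static readonly Dictionary<Texture2D, Color[]> pixelCache = new Dictionary<Texture2D, Color[]>();

private static Color[] GetPixels(Texture2D texture)
{
    Color[] bits;
    if (!pixelCache.TryGetValue(texture, out bits))
    {
        bits = new Color[texture.Width * texture.Height];
        texture.GetData(bits);        // expensive read-back, done only once
        pixelCache[texture] = bits;
    }
    return bits;
}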
EDIT:
For your second problem, I think that if you don't know beforehand where the transparent area of your "big" texture is, you have to check the whole sprite anyway.
If your sprite isn't too big, that's not a big waste of performance.
PS: I see that you compute the intersecting Rectangle on your own; maybe you would find this method useful.
I load multiple meshes from .x files into different mesh variables.
Now I would like to calculate the bounding sphere across all the meshes I have loaded (and which are being displayed).
Please guide me on how this could be achieved.
Can VertexBuffers be appended together in one variable and the bounding sphere be computed using that? (If yes, how are the VertexBuffers added together?)
Otherwise, what alternative would you suggest?
Thanks
It's surprisingly easy to do this:
First, you need to average all your vertices. This gives you the center position.
This is done as follows in C++ (sorry, my C# is pretty rusty, but it should give you the idea):
D3DXVECTOR3 avgPos( 0.0f, 0.0f, 0.0f );
const float rcpNum = 1.0f / (float)numVerts; // Do this here as divides are far more expensive than multiplies.

int count = 0;
while( count < numVerts )
{
    // Instead of adding everything up and then dividing by the number (which could lead
    // to overflows) I'll divide by the number as I go along. The result is the same.
    avgPos.x += vert[count].pos.x * rcpNum;
    avgPos.y += vert[count].pos.y * rcpNum;
    avgPos.z += vert[count].pos.z * rcpNum;
    count++;
}
Now you need to go through every vert and work out which vert is the furthest away from the center point.
Something like this would work (in C++):
float maxSqDist = 0.0f;

int count = 0;
while( count < numVerts )
{
    D3DXVECTOR3 diff = avgPos - vert[count].pos;

    // Note we may as well use the square length as the sqrt is very expensive and the
    // maximum square length will ALSO be the maximum length and yet we only need to
    // do one sqrt this way :)
    const float sqDist = D3DXVec3LengthSq( &diff );
    if ( sqDist > maxSqDist )
    {
        maxSqDist = sqDist;
    }
    count++;
}

const float radius = sqrtf( maxSqDist );
And you now have your center position (avgPos) and your radius (radius) and, thus, all the info you need to define a bounding sphere.
I have an idea: I would determine the center of every single mesh object, and then determine the center of the collection of mesh objects by using that information ...
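Along those lines, a rough sketch of merging per-mesh bounding spheres into one (the helper is hypothetical and assumes each mesh's center and radius are already known, plus a Vector3 type with the usual operators and Length(), e.g. XNA's or Managed DirectX's):

// Hypothetical sketch: merge two bounding spheres into one sphere that
// encloses both; fold all the per-mesh spheres through this pairwise.
static void MergeSpheres(Vector3 centerA, float radiusA, Vector3 centerB, float radiusB,
                         out Vector3 center, out float radius)
{
    Vector3 offset = centerB - centerA;
    float distance = offset.Length();

    if (distance + radiusB <= radiusA)      // B is entirely inside A
    {
        center = centerA;
        radius = radiusA;
        return;
    }
    if (distance + radiusA <= radiusB)      // A is entirely inside B
    {
        center = centerB;
        radius = radiusB;
        return;
    }

    // Otherwise the merged sphere spans from the far side of A to the far side of B.
    radius = (radiusA + radiusB + distance) / 2f;
    center = centerA + offset * ((radius - radiusA) / distance);
}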