I created a C# program that uses managedCUDA to calculate the n-body interaction between lots of "planets" or "balls". I got CUDA to work properly with single float and int calculations as tests, but now with arrays it doesn't seem to work properly. I have the same struct defined both in my C# program and in the kernel:
struct Ball
{
float2 position;
float2 velocity;
float mass;
};
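For reference, the C# side of that shared struct might look something like the following sketch (this is an assumption, not the original code; it presumes a sequential layout and managedCUDA's ManagedCuda.VectorTypes.float2 so the memory layout matches the CUDA struct):
// needs: using System.Runtime.InteropServices; using ManagedCuda.VectorTypes;
[StructLayout(LayoutKind.Sequential)]
struct Ball
{
    public float2 position;
    public float2 velocity;
    public float mass;
    public Ball(float x, float y, float mass)
    {
        position = new float2(x, y);
        velocity = new float2(0, 0);
        this.mass = mass;
    }
}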
Here's the code I use for initializing the kernel in my C# program:
//initializes the CUDA context
cuda = new CudaContext();
//Loads the two kernels, velocity calculation and positions updating according to the velocity
UpdateBallGravity = cuda.LoadKernel("kernel.ptx", "UpdateBallGravity");
UpdateBallPosition = cuda.LoadKernel("kernel.ptx", "UpdateBallPosition");
//allocates gpu memory for a new Ball[] and copies it
d_balls = new Ball[1024];
//generates new balls on the gpu memory
Random random = new Random();
for (int i = 0; i < d_balls.Length; i++)
{
d_balls[i] = new Ball(
(float)random.NextDouble() * ClientSize.X,
(float)random.NextDouble() * ClientSize.Y,
(float)random.NextDouble() * 20000);
}
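The actual device allocation, copy and kernel launch aren't shown above; with managedCUDA they would look roughly like this sketch (the variable name d_ballsGpu, the block size and the gravityInfluence value here are placeholders, not the real code):
// copy the generated balls to the GPU (CudaDeviceVariable handles the allocation and copy)
var d_ballsGpu = new CudaDeviceVariable<Ball>(d_balls.Length);
d_ballsGpu.CopyToDevice(d_balls);
// launch the velocity kernel; the launch configuration is only an example
int threadsPerBlock = 256;
UpdateBallGravity.BlockDimensions = threadsPerBlock;
UpdateBallGravity.GridDimensions = (d_balls.Length + threadsPerBlock - 1) / threadsPerBlock;
UpdateBallGravity.Run(d_ballsGpu.DevicePointer, d_balls.Length, 0.001f);
// read the results back for rendering
d_ballsGpu.CopyToHost(d_balls);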
When I'm about to render, I put a breakpoint to check the values coming from the GPU and find that, after updating the balls' velocities and positions, I get NaN in both the position and velocity members of each ball. The mass doesn't change, since I didn't modify it in the kernel. Here are both kernels:
__global__ void UpdateBallGravity(Ball *balls, int ballCount, float gravityInfluence)
{
int idx = getGlobalIdx_3D_3D();
if (idx >= ballCount)
return;
float2 gravity = float2();
for (int i = 0; i < ballCount; i++)
{
if (i == idx)
continue;
Ball remote = balls[i];
float2 difference = make_float2(remote.position.x - balls[idx].position.x, remote.position.y - balls[idx].position.y);
float f = (balls[idx].mass + remote.mass) / lengthSquared2f(difference);
gravity.x += difference.x*f;
gravity.y += difference.y*f;
}
balls[idx].velocity.x += gravity.x*gravityInfluence;
balls[idx].velocity.y += gravity.y*gravityInfluence;
}
__global__ void UpdateBallPosition(Ball *balls, int ballCount)
{
int idx = getGlobalIdx_3D_3D();
if (idx >= ballCount)
return;
balls[idx].position.x += balls[idx].velocity.x;
balls[idx].position.y += balls[idx].velocity.y;
}
I'm very new at coding (like two weeks), so forgive me for the silly question. I'm trying to code a 3D block-sliding game in Unity where the obstacles generate randomly, forever. The random generation is fine, but for the life of me I cannot figure out how to get the obstacles to spawn at random Z positions. I'm currently stuck with a CS0117 error, so I can't test my latest attempt to fix this.
Here's my current code:
using UnityEngine;
public class BlockSpawner : MonoBehaviour
{
public Transform[] SpawnPoints;
public GameObject blockPrefab;
public float timeBetweenWaves = 1f;
private float timeToSpawn = 2f;
public class Random { }
void Update()
{
if (Time.time >= timeToSpawn)
{
SpawnBlocks();
timeToSpawn = Time.time + timeBetweenWaves;
}
}
void SpawnBlocks()
{
//Spawning blocks in
int randomIndex = Random.Range(0, SpawnPoints.Length);
{
for (int i = 0; i < SpawnPoints.Length; i++)
if (randomIndex != i)
{
// generate random position
var viewRange = this.Size - SpawnPoints.Size;
Random random = new Random();
var left = random.Next(0, viewRange.Width);
var top = random.Next(0, viewRange.Height);
// set the random position
SpawnPoints.Location = new Point(left, top);
}
}
}
And here is my last working code (note this has no random-location code attempts):
using UnityEngine;
public class BlockSpawner : MonoBehaviour
{
public Transform[] SpawnPoints;
public GameObject blockPrefab;
public float timeBetweenWaves = 1f;
private float timeToSpawn = 2f;
void Update()
{
if (Time.time >= timeToSpawn)
{
SpawnBlocks();
timeToSpawn = Time.time + timeBetweenWaves;
}
}
void SpawnBlocks()
{
//Spawning blocks in
int randomIndex = Random.Range(0, SpawnPoints.Length);
{
for (int i = 0; i < SpawnPoints.Length; i++)
if (randomIndex != i)
{
Instantiate(blockPrefab, SpawnPoints[i].position, Quaternion.identity);
}
}
}
}
Please help!
I will guess at the error and say this is wrong:
var viewRange = this.Size - SpawnPoints.Size;
var left = random.Next(0, viewRange.Width);
var top = random.Next(0, viewRange.Height);
// set the random position
SpawnPoints.Location = new Point(left, top);
Random random = new Random();
You are calling methods on 'random' before you have set it up. You need:
var viewRange = this.Size - SpawnPoints.Size;
Random random = new Random();
var left = random.Next(0, viewRange.Width);
var top = random.Next(0, viewRange.Height);
// set the random position
SpawnPoints.Location = new Point(left, top);
But I will point out two things:
Unity has given you its own random number generator, and you have already used it (Random.Range), but for some reason you decided to use the standard C# one.
Creating a new Random each time round that loop will keep generating the same numbers over and over again if called in rapid succession; it is better to create the Random once at the start of the function, or to use the Unity one, as in the sketch below.
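A minimal sketch of that suggestion using Unity's built-in Random (the zMin/zMax range values are placeholders, not from the original code):
// pick a random Z with UnityEngine.Random instead of System.Random
float z = Random.Range(zMin, zMax);
Vector3 spawnPos = new Vector3(SpawnPoints[i].position.x, SpawnPoints[i].position.y, z);
Instantiate(blockPrefab, spawnPos, Quaternion.identity);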
To generate obstacles at a random Z position (keeping it simple), we have to keep these things in mind:
1: The X and Y coordinates are not going to change.
2: Only the Z coordinate is going to change.
Solution:
Generate a new Vector3 that contains the same X and Y coordinates but a random Z coordinate.
Code:
// the game object (mean obstacle to generate)
public GameObject ObstaclePrefab;
// the coordinates that are not going to change (X and Y are the same for all obstacles; only Z will change)
public Vector2 ConstantCords = new Vector2(0, 0);
// the minimum and maximum range for obstacles to be generated (x is the minimum and y is the maximum Z coordinate)
public Vector2 MaxZCords = new Vector2(-50f, 50f);
// this method will generate the obstacle at random position
public void GenerateRandomObstacle()
{
// getting a random Z coordinate
// Random.Range will give a random value within the range
float Z = Random.Range(MaxZCords.x, MaxZCords.y);
// getting position to generate the obstacle
// putting the constant and the random Z value
Vector3 pos = new Vector3(ConstantCords.x, ConstantCords.y, Z);
// generating the obstacle at the generated random position
// we are setting the rotation to rotation of the prefab
Instantiate(ObstaclePrefab, pos, ObstaclePrefab.transform.rotation);
}
I'm trying to implement bounce physics for a ball in my game using MonoGame (C#). I've googled plenty but I'm unable to understand how to do this.
The circle should be able to hit any of the red lines and bounce realistically (not just invert the velocity).
I'm using this code to detect collision:
public bool IntersectCircle(Vector2 pos, float radius, out Vector2 circleWhenHit)
{
circleWhenHit = default;
// find the closest point on the line segment to the center of the circle
var line = End - Start;
var lineLength = line.Length();
var lineNorm = (1 / lineLength) * line;
var segmentToCircle = pos - Start;
var closestPointOnSegment = Vector2.Dot(segmentToCircle, line) / lineLength;
// Special cases where the closest point happens to be the end points
Vector2 closest;
if (closestPointOnSegment < 0) closest = Start;
else if (closestPointOnSegment > lineLength) closest = End;
else closest = Start + closestPointOnSegment * lineNorm;
// Find that distance. If it is less than the radius, then we
// are within the circle
var distanceFromClosest = pos - closest;
var distanceFromClosestLength = distanceFromClosest.Length();
if (distanceFromClosestLength > radius)
return false;
// So find the distance that places the intersection point right at
// the radius. This is the center of the circle at the time of collision
// and is different than the result from Doswa
var offset = (radius - distanceFromClosestLength) * ((1 / distanceFromClosestLength) * distanceFromClosest);
circleWhenHit = pos - offset;
return true;
}
And this code when the ball wants to change position:
private void GameBall_OnPositionChange(object sender, GameBallPositionChangedEventArgs e)
{
foreach(var boundary in mapBounds)
{
if (boundary.IntersectCircle(e.TargetPosition, gameBall.Radius, out Vector2 colVector))
{
var normalizedVelocity = Vector2.Normalize(e.Velocity);
var velo = e.Velocity.Length();
var surfaceNormal = Vector2.Normalize(colVector - e.CurrentPosition);
e.Velocity = Vector2.Reflect(normalizedVelocity, surfaceNormal) * velo;
e.TargetPosition = e.CurrentPosition;
break;
}
}
}
This code gives a decent result but I'm not using my boundary positions to calculate an angle.
How do I proceed to take those into account?
EDIT:
I've removed the event-based update and added collision between players and the ball. This is now my map-update method:
foreach (var entity in circleGameEntities)
{
for (int i = 0; i < interpolatePos; i++)
{
entity.UpdatePosition(gameTime, interpolatePos);
var intersectingBoundaries = mapBounds
.Where(b =>
{
var intersects = b.IntersectCircle(entity.Position, entity.Radius, 0f, out _);
if (intersects)
averageNormal += b.Normal;
return intersects;
}).ToList();
if (intersectingBoundaries.Count > 0)
{
averageNormal.Normalize();
var normalizedVelocity = Vector2.Normalize(entity.Velocity); // normalize the velocity
var velo = entity.Velocity.Length();
entity.Velocity = Vector2.Reflect(normalizedVelocity, averageNormal) * velo * entity.Bounciness;
entity.UpdatePosition(gameTime, interpolatePos);
}
foreach (var otherEntity in circleGameEntities.Where(e => e != entity))
{
if (entity.CollidesWithCircle(otherEntity, out Vector2 d))
{
Vector2 CMVelocity = (otherEntity.Mass * otherEntity.Velocity + entity.Mass * entity.Velocity) / (otherEntity.Mass + entity.Mass);
var otherEntityNorm = otherEntity.Position - entity.Position;
otherEntityNorm.Normalize();
var entityNorm = -otherEntityNorm;
var myVelocity = entity.Velocity;
myVelocity -= CMVelocity;
myVelocity = Vector2.Reflect(myVelocity, otherEntityNorm);
myVelocity += CMVelocity;
entity.Velocity = myVelocity;
entity.UpdatePosition(gameTime, interpolatePos);
var otherEntityVelocity = otherEntity.Velocity;
otherEntityVelocity -= CMVelocity;
otherEntityVelocity = Vector2.Reflect(otherEntityVelocity, entityNorm);
otherEntityVelocity += CMVelocity;
otherEntity.Velocity = otherEntityVelocity;
otherEntity.UpdatePosition(gameTime, interpolatePos);
}
}
}
entity.UpdateDrag(gameTime);
entity.Update(gameTime);
}
This code works quite well, but sometimes the objects get stuck inside the walls and each other.
CircleGameEntity class:
class CircleGameEntity : GameEntity
{
internal float Drag { get; set; } = .9999f;
internal float Radius => Scale * (Texture.Width + Texture.Height) / 4;
internal float Bounciness { get; set; } = 1f;
internal float Mass => BaseMass * Scale;
internal float BaseMass { get; set; }
internal Vector2 Velocity { get; set; }
internal float MaxVelocity { get; set; } = 10;
internal void UpdatePosition(GameTime gameTime, int interpolate)
{
var velocity = Velocity;
if (velocity.X < 0 && velocity.X < -MaxVelocity)
velocity.X = -MaxVelocity;
else if (velocity.X > 0 && velocity.X > MaxVelocity)
velocity.X = MaxVelocity;
if (velocity.Y < 0 && velocity.Y < -MaxVelocity)
velocity.Y = -MaxVelocity;
else if (velocity.Y > 0 && velocity.Y > MaxVelocity)
velocity.Y = MaxVelocity;
Velocity = velocity;
Position += Velocity / interpolate;
}
internal void UpdateDrag(GameTime gameTime)
{
Velocity *= Drag;
}
internal bool CollidesWithCircle(CircleGameEntity otherCircle, out Vector2 depth)
{
var a = Position;
var b = otherCircle.Position;
depth = Vector2.Zero;
float distance = Vector2.Distance(a, b);
if (Radius + otherCircle.Radius > distance)
{
float result = (Radius + otherCircle.Radius) - distance;
depth.X = (float)Math.Cos(result);
depth.Y = (float)Math.Sin(result);
}
return depth != Vector2.Zero;
}
}
The surfaceNormal you compute is not the boundary normal, but the direction from the collision point to the center of the circle. This vector takes the roundness of the ball into account and is the negative of the ball's direction (if normalized) as if it hit the surface head-on, which is not needed unless the other surface is curved.
In the Boundary class, calculate the angle and one of the normals in the constructor and store them as public readonly fields:
public readonly Vector2 Angle; // replaces lineNorm, renamed for clarity
public readonly Vector2 Normal;
public readonly float Length;
public Boundary(... , bool inside) // inside determines which normal faces the center
{
// ... existing constructor code
var line = End - Start;
Length = line.Length();
Angle = (1 / Length) * line;
Normal = new Vector2(-Angle.Y,Angle.X);
if (inside) Normal *= -1;
}
public bool IntersectCircle(Vector2 pos, float radius)
{
// find the closest point on the line segment to the center of the circle
var segmentToCircle = pos - Start;
var closestPointOnSegment = Vector2.Dot(segmentToCircle, End - Start) / Length;
// Special cases where the closest point happens to be the end points
Vector2 closest;
if (closestPointOnSegment < 0) closest = Start;
else if (closestPointOnSegment > Length) closest = End;
else closest = Start + closestPointOnSegment * Angle;
// Find that distance. If it is less than the radius, then we
// are within the circle
var distanceFromClosest = pos - closest;
return distanceFromClosest.LengthSquared() <= radius * radius; // the squared compare avoids a square root
}
The change position code subset:
// ...
var normalizedVelocity = Vector2.Normalize(e.Velocity);
var velo = e.Velocity.Length();
e.Velocity = Vector2.Reflect(normalizedVelocity, boundary.Normal) * velo;
//Depending on timing and movement code, you may need add the next line to resolve the collision during the current step.
e.CurrentPosition += e.Velocity;
//...
This updated code assumes a single-sided non-moving boundary line as prescribed by the inside variable.
I am not a big fan of C# events in games, since they add layers of delay (both internally to C# and, during proper use, the cast of sender).
I would be remiss not to mention the misuse of the e variable: e should always be treated as read-only. The sender variable should be cast (which is slow) and used for writing purposes, as in the sketch below.
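A minimal sketch of that sender-cast pattern (GameBall as the event source type and someBoundaryNormal are assumptions, not names from the original code):
private void GameBall_OnPositionChange(object sender, GameBallPositionChangedEventArgs e)
{
    var ball = (GameBall)sender;   // cast once (the slow part), then write to the ball itself
    // read from e only; write the result through the cast sender
    var newVelocity = Vector2.Reflect(Vector2.Normalize(e.Velocity), someBoundaryNormal) * e.Velocity.Length();
    ball.Velocity = newVelocity;
}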
I have two files (NewComputeShader.compute and ShaderRun.cs). ShaderRun.cs runs the shader and draws its texture on a camera (the script is a component of the camera).
On start, Unity draws only one white pixel in the bottom-left corner.
(twidth = 256, theight = 256, agentsnum = 10)
NewComputeShader.compute:
// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel CSUpdate
// Create a RenderTexture with enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float4> Result;
uint width = 256;
uint height = 256;
int numAgents = 10;
float moveSpeed = 100;
uint PI = 3.1415926535;
float DeltaTime = 1;
uint hash(uint state) {
state ^= 2747636419u;
state *= 2654435769u;
state ^= state >> 16;
state *= 2654435769u;
state ^= state >> 16;
state *= 2654435769u;
return state;
}
uint scaleToRange01(uint state) {
state /= 4294967295.0;
return state;
}
struct Agent {
float2 position;
float angle;
};
RWStructuredBuffer<Agent> agents;
[numthreads(8,8,1)]
void CSUpdate(uint3 id : SV_DispatchThreadID)
{
//if (id.x >= numAgents) { return; }
Agent agent = agents[id.x];
uint random = hash(agent.position.y * width + agent.position.x + hash(id.x));
float2 direction = float2(cos(agent.angle), sin(agent.angle));
float2 newPos = agent.position + direction * moveSpeed * DeltaTime;
if (newPos.x < 0 || newPos.x >= width || newPos.y < 0 || newPos.y >= height) {
newPos.x = min(width - 0.01, max(0, newPos.x));
newPos.y = min(height - 0.01, max(0, newPos.y));
agents[id.x].angle = scaleToRange01(random) * 2 * PI;
}
agents[id.x].position = newPos;
Result[int2(newPos.x, newPos.y)] = float4(1,1,1,1);
}
ShaderRun.cs:
using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
public class ShaderRun : MonoBehaviour
{
public ComputeShader computeShader;
public RenderTexture renderTexture;
public int twidth;
public int theight;
public int agentsnum;
ComputeBuffer agentsBuffer;
struct MyAgent
{
public Vector2 position;
public float angle;
};
// Start is called before the first frame update
void Start()
{
renderTexture = new RenderTexture(twidth, theight, 24);
renderTexture.enableRandomWrite = true;
renderTexture.Create();
computeShader.SetTexture(0, "Result", renderTexture);
agentsBuffer = new ComputeBuffer(agentsnum, sizeof(float)*3); //make new compute buffer with specified size, and specified "stride" //stride is like the size of each element, in your case it would be 3 floats, since Vector3 is 3 floats.
ResetAgents();
computeShader.SetBuffer(0, "agents", agentsBuffer); //Linking the compute shader and cs shader buffers
computeShader.Dispatch(0, renderTexture.width / 8, renderTexture.height / 8, 1);
}
void OnRenderImage(RenderTexture src, RenderTexture dest)
{
Graphics.Blit(renderTexture, dest);
}
private void ResetAgents()
{
MyAgent[] aArray = new MyAgent[agentsnum];
for (int i=0; i<agentsnum; i++)
{
MyAgent a = new MyAgent();
a.position = new Vector2(128, 128);
a.angle = 2 * (float)Math.PI * (i / agentsnum);
aArray[i] = a;
}
agentsBuffer.SetData(aArray);
ComputeStepFrame();
}
private void ComputeStepFrame()
{
computeShader.SetFloat("DeltaTime", Time.deltaTime);
int kernelHandle = computeShader.FindKernel("CSUpdate");
computeShader.SetBuffer(kernelHandle, "agents", agentsBuffer);
computeShader.Dispatch(0, renderTexture.width / 8, renderTexture.height / 8, 1);
}
// Update is called once per frame
void Update()
{
ComputeStepFrame();
}
}
This is an attempt at recreating the code from this video: https://www.youtube.com/watch?v=X-iSQQgOd1A&t=730s (part: Side-tracked by Slime). The result should look like the first demonstration of the agents in the video.
Edit: I really recommend checking out this video. It is very good!
I'm doing the same. To start with, the scaleToRange01 function should probably return a float. As for the location, you might want to look at the C# side: how are you initializing the agents and getting that data into the buffer? You need to create a similar struct in C# and then assign it, something like below.
int totalSize = (sizeof(float) * 2) + (sizeof(float));
agentBuffer = new ComputeBuffer(agents.Length, totalSize);
agentBuffer.SetData(agents);
computeShader.SetBuffer(0, "agents", agentBuffer);
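For completeness, the matching C# struct could look like this sketch (the field order and types must mirror the HLSL Agent struct; the C# name Agent is just an assumption):
struct Agent
{
    public Vector2 position; // maps to float2 in HLSL
    public float angle;
}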
I am also attempting to recreate this. The problem is Sebastian leaves out his C# code and some of his HLSL, so it's hard to put together the pieces that aren't there. I worked nonstop all day yesterday and finally got it to perform demonstration 2. The most difficult thing for me was getting the threading correct and having the GPU compute all the items I need it to. I am dreading starting the trail dissipation and trail sense, but honestly it felt great getting to demo 2, and that's what is pushing me to keep at it. Everything is very touchy with this project and it is not for the casual programmer. (Also, learn a bit about HLSL if you haven't.) Another thing: I don't use his random angle generator, I just created my own. I know this doesn't help much, but just know that other people are also struggling through this. Sebastian is a genius.
I found this question after quite some time, but the topic is still interesting; maybe my answer will help passersby later.
BTW, look at this video.
Slime is a game life form now!
The problem with the code in the original question is ambiguity about what the kernel is supposed to process.
[numthreads(8,8,1)]
void CSUpdate(uint3 id : SV_DispatchThreadID)
{
//if (id.x >= numAgents) { return; }
Agent agent = agents[id.x];
//etc...
}
In this kernel you are meant to process a 1D array of agents, so you need to dispatch it like this:
shader.Dispatch(kernelUpdateAgents,
Mathf.CeilToInt(numAgents / (float) xKernelThreadGroupSize),
1,
1);
And of course you need to correct the kernel's thread group size (numthreads) accordingly:
[numthreads(8,1,1)]
void CSUpdate(uint3 id : SV_DispatchThreadID)
For a 2D texture you keep the kernel like this:
[numthreads(8,8,1)]
void ProcessTexture(uint3 id : SV_DispatchThreadID)
{
//some work
}
And only then is it okay to dispatch it with a second dimension:
shader.Dispatch(kernelProcessTexture,
Mathf.CeilToInt(TextureWidth / (float) xKernelThreadGroupSize),
Mathf.CeilToInt(TextureHeight / (float) yKernelThreadGroupSize),
1);
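The xKernelThreadGroupSize / yKernelThreadGroupSize values used above are not defined in this answer; one way to obtain them on the C# side is to query the shader (a sketch, assuming the kernel names used above):
int kernelUpdateAgents = shader.FindKernel("CSUpdate");
shader.GetKernelThreadGroupSizes(kernelUpdateAgents,
    out uint xKernelThreadGroupSize, out uint yKernelThreadGroupSize, out _);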
P.S. There is a link to a GitHub repo under the video.
My algorithm for calculating which block a player is looking at (voxel-based world) is not working correctly. I have adapted it from this tutorial from 2D to 3D. At times it shows the correct block, but other times it either returns nothing when it should, or something in a completely different direction. Why is this happening?
public (Block, Box?) GetLookAtBlock(Vector3 pos, Vector3 look) {
try {
look = look.Normalized() * 4;
float deltaX = Math.Abs(look.Normalized().X);
float deltaY = Math.Abs(look.Normalized().Y);
float deltaZ = Math.Abs(look.Normalized().Z);
int stepX, stepY, stepZ;
float distX, distY, distZ;
if (look.X < 0) {
distX = (pos.X - SandboxMath.RoundDown(pos.X)) * deltaX;
stepX = -1;
} else {
distX = (SandboxMath.RoundDown(pos.X) + 1 - pos.X) * deltaX;
stepX = 1;
}
if (look.Y < 0) {
distY = (pos.Y - SandboxMath.RoundDown(pos.Y)) * deltaY;
stepY = -1;
} else {
distY = (SandboxMath.RoundDown(pos.Y) + 1 - pos.Y) * deltaY;
stepY = 1;
}
if (look.Z < 0) {
distZ = (pos.Z - SandboxMath.RoundDown(pos.Z)) * deltaZ;
stepZ = -1;
} else {
distZ = (SandboxMath.RoundDown(pos.Z) + 1 - pos.Z) * deltaZ;
stepZ = 1;
}
int endX = SandboxMath.RoundDown(pos.X + look.X);
int endY = SandboxMath.RoundDown(pos.Y + look.Y);
int endZ = SandboxMath.RoundDown(pos.Z + look.Z);
int x = (int)pos.X;
int y = (int)pos.Y;
int z = (int)pos.Z;
Block start = GetBlock(x, y, z);
if (start != 0) {
return (start, new Box(new Vector3(x, y, z), new Vector3(x + 1, y + 1, z + 1)));
}
while (x != endX && y != endY && z != endZ) {
if (distX < distY) {
if (distX < distZ) {
distX += deltaX;
x += stepX;
} else {
distZ += deltaZ;
z += stepZ;
}
} else {
if (distY < distZ) {
distY += deltaY;
y += stepY;
} else {
distZ += deltaZ;
z += stepZ;
}
}
Block b = GetBlock(x, y, z);
if (b != 0) {
return (b, new Box(new Vector3(x, y, z), new Vector3(x + 1, y + 1, z + 1)));
}
}
return (0, null);
} catch (IndexOutOfRangeException) {
return (0, null);
}
}
Your DDA has two issues I can see at first glance:
it works only if Z is the major axis
so only if you are in camera space or have a fixed camera looking in the Z direction
your deltas are weird
why:
delta? = abs(1 / look.Normalized().?);
I would expect:
delta? = abs(look.Normalized().?);
I do not code in C#, so I am not confident enough to repair your code; however, here is my C++ template for an n-dimensional DDA, so just compare and repair yours according to it ...
template<const int n>class DDA
{
public:
int p0[n],p1[n],p[n];
int d[n],s[n],c[n],ix;
DDA(){};
DDA(DDA& a) { *this=a; }
~DDA(){};
DDA* operator = (const DDA *a) { *this=*a; return this; }
//DDA* operator = (const DDA &a) { ..copy... return this; }
void start()
{
int i;
for (ix=0,i=0;i<n;i++)
{
p[i]=p0[i]; s[i]= 0; d[i]=p1[i]-p0[i];
if (d[i]>0) s[i]=+1;
if (d[i]<0){ s[i]=-1; d[i]=-d[i]; }
if (d[ix]<d[i]) ix=i;
}
for (i=0;i<n;i++) c[i]=d[ix];
}
void start(double *fp0) // this will add the subpixel offset according to first point as double
{
int i; start();
for (i=0;i<n;i++)
{
if (s[i]<0) c[i]=double(double(d[ix])*( fp0[i]-floor(fp0[i])));
if (s[i]>0) c[i]=double(double(d[ix])*(1.0-fp0[i]+floor(fp0[i])));
}
}
bool update()
{
int i;
for (i=0;i<n;i++){ c[i]-=d[i]; if (c[i]<=0){ c[i]+=d[ix]; p[i]+=s[i]; }}
return (p[ix]!=p1[ix]+s[ix]);
}
};
start() initializes the variables and position for the DDA (from the p0, p1 control points) and update() is just a single step of the DDA ... The resulting iterated point is in p.
s is the step, d is the delta, c is the counter and ix is the major axis index.
The usage is like this:
DDA<3> A; // 3D
A.p0[...]=...; // set start point
A.p1[...]=...; // set end point
for (A.start();A.update();)
{
A.p[...]; // here use the iterated point
}
DDA walk-through
Well, DDA is just interpolation (rasterization) of integer positions on some line between two endpoints (p0, p1). The line equation is like this:
p(t) = p0 + t*(p1-p0);
t = <0.0,1.0>
However, that involves floating-point math and we want integers, so we can rewrite it to something like this:
dp = p1-p0
D = max (|dp.x|,|dp.y|,|dp.z|,...)
p.x(i) = p0.x + (dp.x*i)/D
p.y(i) = p0.y + (dp.y*i)/D
p.z(i) = p0.z + (dp.z*i)/D
...
i = { 0,1,...D }
where i, D match the major axis (the one with the biggest change). If you look closer we are using *i/D, which is a slow operation, and we usually want speed, so we can rewrite the term (exploiting the fact that i goes from 0 to D in order) into something like this (shown for the x axis only; the others will be the same with different indexes ...):
p.x=p0.x; // start position
s.x=0; d.x=p1.x-p0.x; // step and abs delta
if (d.x>0) s.x=+1;
if (d.x<0){ s.x=-1; d.x=-d.x; }
D = max(d.x,d.y,d.z,...); // major axis abs delta
c.x=D; // counter for the iteration
for (i=0;i<D;i++)
{
c.x-=d.x; // update counter with axis abs delta
if (c.x<=0) // counter overflowed?
{
c.x+=D; // update counter with major axis abs delta
p.x+=s.x; // update axis by step
}
}
Now take a closer look at the counter c: we are adding D to it and subtracting d.x, which is the i/D rewritten into D iterations. All the other axes are computed in the same manner; you just need to add a counter c, step s and abs delta d for each axis ...
btw if it helps, look at this:
volumetric GLSL back ray tracer
which is (I assume) what you are doing, but implemented in a GLSL shader (see the fragment code); however, it does not use DDA, instead it adds a unit direction vector to the initial position until it hits something or reaches the end of the voxel space ...
btw it is based on:
Ray Casting with different height size
just like the link of yours.
[Edit] wrong hits (guessed from your comments)
That most likely has nothing to do with the DDA. It is more likely an edge case when you are standing directly on a cell crossing, or have a wrongly truncated position, or have wrongly z-sorted the hits. I remember having trouble with this. I ended up with a very weird solution in GLSL; see the link above and the fragment code. Look for
// YZ plane voxels hits
It is directly after the non-"DDA" ray casting code. It detects which plane of the voxel will be hit; I think you should do something similar. It was weird, as in the 2D DOOM (also in the links above) I had no such problems, but that was because they were solved by different math (suitable only for 2D).
The GLSL code just before the casting of the ray changes the position a bit to avoid edge cases. Pay attention to the floor and ceil; mine works on floats, so it would need some tweaking for int math. Luckily I was repairing my other ray casting engine based on this:
Comanche Voxel space ray casting
And the solution is to offset the DDA by the subpixel start position of the ray. I updated the DDA code above; the new usage is:
DDA<3> A; // 3D
A.p0[...]=...; // set start point
A.p1[...]=...; // set end point
for (A.start(start_point_as_double[3]);A.update();)
{
A.p[...]; // here use the iterated point
}
Also, on second thought, make sure that in your DDA the c, d, s variables are integers; if they are floating-point instead, that might also cause the problems you are describing ...
Unity has a function Terrain.SampleHeight(point), which is great: it instantly gives you the height of the Terrain underfoot rather than having to cast.
However, any non-trivial project has more than one Terrain. (Indeed, any physically large scene inevitably features terrain stitching, one way or another.)
Unity has a function Terrain.activeTerrain which - I'm not making this up - gives you the "first one loaded".
Obviously that is completely useless.
In fact, is there a fast way to get the Terrain "under you"? You could then use the fast function .SampleHeight.
{Please note, of course, you could ... cast to find a Terrain under you! But you would then have your altitude, so there's no need to worry about .SampleHeight!}
In short, is there a matching function to use with SampleHeight which lets that function know which Terrain to use for a given xyz?
(Or indeed, is SampleHeight just a fairly useless demo function, usable only in demos with one Terrain?)
Is there in fact a fast way to get the Terrain "under you" - so as to then use the fast function .SampleHeight?
Yes, it can be done.
(Or indeed, is SampleHeight just a fairly useless demo function, usable only in demos with one Terrain?)
No.
There is Terrain.activeTerrain, which returns the main terrain in the scene. There is also Terrain.activeTerrains (notice the "s" at the end), which returns all active terrains in the scene.
Obtain the terrains with Terrain.activeTerrains, which returns a Terrain array, then use the Terrain.GetPosition function to obtain each terrain's position. Get the current terrain by finding the closest terrain to the player's position. You can do this by comparing the terrain positions with Vector3.Distance or Vector3.sqrMagnitude (which is faster).
Terrain GetClosestCurrentTerrain(Vector3 playerPos)
{
//Get all terrain
Terrain[] terrains = Terrain.activeTerrains;
//Make sure that terrains length is ok
if (terrains.Length == 0)
return null;
//If just one, return that one terrain
if (terrains.Length == 1)
return terrains[0];
//Get the closest one to the player
float lowDist = (terrains[0].GetPosition() - playerPos).sqrMagnitude;
var terrainIndex = 0;
for (int i = 1; i < terrains.Length; i++)
{
Terrain terrain = terrains[i];
Vector3 terrainPos = terrain.GetPosition();
//Find the distance and check if it is lower than the last one then store it
var dist = (terrainPos - playerPos).sqrMagnitude;
if (dist < lowDist)
{
lowDist = dist;
terrainIndex = i;
}
}
return terrains[terrainIndex];
}
USAGE:
Assuming that the player's position is transform.position:
//Get the current terrain
Terrain terrain = GetClosestCurrentTerrain(transform.position);
Vector3 point = new Vector3(0, 0, 0);
//Can now use SampleHeight
float yHeight = terrain.SampleHeight(point);
While it's possible to do it with Terrain.SampleHeight, this can also be solved with a simple raycast from the player's position down to the Terrain.
Vector3 SampleHeightWithRaycast(Vector3 playerPos)
{
float groundDistOffset = 2f;
RaycastHit hit;
//Raycast down to terrain
if (Physics.Raycast(playerPos, -Vector3.up, out hit))
{
//Get y position
playerPos.y = (hit.point + Vector3.up * groundDistOffset).y;
}
return playerPos;
}
Terrain.GetPosition() = Terrain.transform.position = the terrain's position in the world.
A working method:
Terrain[] _terrains = Terrain.activeTerrains;
int GetClosestCurrentTerrain(Vector3 playerPos)
{
//Get the closest one to the player
var center = new Vector3(_terrains[0].transform.position.x + _terrains[0].terrainData.size.x / 2, playerPos.y, _terrains[0].transform.position.z + _terrains[0].terrainData.size.z / 2);
float lowDist = (center - playerPos).sqrMagnitude;
var terrainIndex = 0;
for (int i = 0; i < _terrains.Length; i++)
{
center = new Vector3(_terrains[i].transform.position.x + _terrains[i].terrainData.size.x / 2, playerPos.y, _terrains[i].transform.position.z + _terrains[i].terrainData.size.z / 2);
//Find the distance and check if it is lower than the last one then store it
var dist = (center - playerPos).sqrMagnitude;
if (dist < lowDist)
{
lowDist = dist;
terrainIndex = i;
}
}
return terrainIndex;
}
It turns out the answer is simply NO, Unity does not provide such a function.
You can use this function to get the Closest Terrain to your current Position:
int GetClosestTerrain(Vector3 CheckPos)
{
int terrainIndex = 0;
float lowDist = float.MaxValue;
for (int i = 0; i < _terrains.Length; i++)
{
var center = new Vector3(_terrains[i].transform.position.x + _terrains[i].terrainData.size.x / 2, CheckPos.y, _terrains[i].transform.position.z + _terrains[i].terrainData.size.z / 2);
float dist = Vector3.Distance(center, CheckPos);
if (dist < lowDist)
{
lowDist = dist;
terrainIndex = i;
}
}
return terrainIndex;
}
and then you can use the function like this:
private Terrain[] _terrains;
void Start()
{
_terrains = Terrain.activeTerrains;
Vector3 start_pos = Vector3.zero;
start_pos.y = _terrains[GetClosestTerrain(start_pos)].SampleHeight(start_pos);
}
// requires: using System.Linq;
public static Terrain GetClosestTerrain(Vector3 position)
{
return Terrain.activeTerrains.OrderBy(x =>
{
var terrainPosition = x.transform.position;
var terrainSize = x.terrainData.size * 0.5f;
var terrainCenter = new Vector3(terrainPosition.x + terrainSize.x, position.y, terrainPosition.z + terrainSize.z);
return Vector3.Distance(terrainCenter, position);
}).First();
}
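Example usage (a sketch; transform.position stands in for the player's position, as in the answers above):
float height = GetClosestTerrain(transform.position).SampleHeight(transform.position);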
Raycast solution (this was not asked, but for those looking for a solution using a raycast):
Raycast down from the player and ignore everything that does not have the "Terrain" layer (the layer can easily be set in the Inspector).
Code:
void Update() {
// Put this on the Player! Raycasts down (ray length = 10f); if we hit something on the "Terrain" layer, log its instanceID
RaycastHit hit;
if (Physics.Raycast (transform.localPosition, transform.TransformDirection (Vector3.down), out hit, 10f, 1 << LayerMask.NameToLayer("Terrain"))) {
Debug.Log(hit.transform.gameObject.GetInstanceID());
}
}
At this point you already have a reference to the Terrain via "hit.transform.gameObject".
For my case, I wanted to reference this terrain by its instanceID:
// any other script
public static UnityEngine.Object FindObjectFromInstanceID(int goID) {
return (UnityEngine.Object)typeof(UnityEngine.Object)
.GetMethod("FindObjectFromInstanceID", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Static)
.Invoke(null, new object[] { goID });
}
But as written above, if you want the Terrain itself (as a Terrain object) and not the instanceID, then "hit.transform.gameObject" (or its Terrain component via GetComponent<Terrain>()) will already give you the reference.
Input and code snippets taken from these links:
https://answers.unity.com/questions/1164722/raycast-ignore-layers-except.html
https://answers.unity.com/questions/34929/how-to-find-object-using-instance-id-taken-from-ge.html