Compute shader isn't updating - C#

I have two files (NewComputeShader.compute and ShaderRun.cs). ShaderRun.cs runs the shader and draws its texture on a camera (the script is a component of the camera).
On start, Unity draws one white pixel in the bottom-left corner.
(twidth = 256, theight = 256, agentsnum = 10)
NewComputeShader.compute:
// Each #kernel tells which function to compile; you can have many kernels
#pragma kernel CSUpdate

// Create a RenderTexture with enableRandomWrite flag and set it
// with cs.SetTexture
RWTexture2D<float4> Result;

uint width = 256;
uint height = 256;
int numAgents = 10;
float moveSpeed = 100;
uint PI = 3.1415926535;
float DeltaTime = 1;

uint hash(uint state) {
    state ^= 2747636419u;
    state *= 2654435769u;
    state ^= state >> 16;
    state *= 2654435769u;
    state ^= state >> 16;
    state *= 2654435769u;
    return state;
}

uint scaleToRange01(uint state) {
    state /= 4294967295.0;
    return state;
}

struct Agent {
    float2 position;
    float angle;
};

RWStructuredBuffer<Agent> agents;

[numthreads(8,8,1)]
void CSUpdate(uint3 id : SV_DispatchThreadID)
{
    //if (id.x >= numAgents) { return; }
    Agent agent = agents[id.x];
    uint random = hash(agent.position.y * width + agent.position.x + hash(id.x));

    float2 direction = float2(cos(agent.angle), sin(agent.angle));
    float2 newPos = agent.position + direction * moveSpeed * DeltaTime;

    if (newPos.x < 0 || newPos.x >= width || newPos.y < 0 || newPos.y >= height) {
        newPos.x = min(width - 0.01, max(0, newPos.x));
        newPos.y = min(height - 0.01, max(0, newPos.y));
        agents[id.x].angle = scaleToRange01(random) * 2 * PI;
    }

    agents[id.x].position = newPos;
    Result[int2(newPos.x, newPos.y)] = float4(1,1,1,1);
}
ShaderRun.cs:
using System;
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

public class ShaderRun : MonoBehaviour
{
    public ComputeShader computeShader;
    public RenderTexture renderTexture;
    public int twidth;
    public int theight;
    public int agentsnum;
    ComputeBuffer agentsBuffer;

    struct MyAgent
    {
        public Vector2 position;
        public float angle;
    };

    // Start is called before the first frame update
    void Start()
    {
        renderTexture = new RenderTexture(twidth, theight, 24);
        renderTexture.enableRandomWrite = true;
        renderTexture.Create();
        computeShader.SetTexture(0, "Result", renderTexture);
        agentsBuffer = new ComputeBuffer(agentsnum, sizeof(float) * 3); //make new compute buffer with specified size, and specified "stride"
        //stride is like the size of each element, in your case it would be 3 floats, since Vector3 is 3 floats.
        ResetAgents();
        computeShader.SetBuffer(0, "agents", agentsBuffer); //Linking the compute shader and cs shader buffers
        computeShader.Dispatch(0, renderTexture.width / 8, renderTexture.height / 8, 1);
    }

    void OnRenderImage(RenderTexture src, RenderTexture dest)
    {
        Graphics.Blit(renderTexture, dest);
    }

    private void ResetAgents()
    {
        MyAgent[] aArray = new MyAgent[agentsnum];
        for (int i = 0; i < agentsnum; i++)
        {
            MyAgent a = new MyAgent();
            a.position = new Vector2(128, 128);
            a.angle = 2 * (float)Math.PI * (i / agentsnum);
            aArray[i] = a;
        }
        agentsBuffer.SetData(aArray);
        ComputeStepFrame();
    }

    private void ComputeStepFrame()
    {
        computeShader.SetFloat("DeltaTime", Time.deltaTime);
        int kernelHandle = computeShader.FindKernel("CSUpdate");
        computeShader.SetBuffer(kernelHandle, "agents", agentsBuffer);
        computeShader.Dispatch(0, renderTexture.width / 8, renderTexture.height / 8, 1);
    }

    // Update is called once per frame
    void Update()
    {
        ComputeStepFrame();
    }
}
Also, this is an attempt to recreate the code from this video: https://www.youtube.com/watch?v=X-iSQQgOd1A&t=730s (part: Side-tracked by Slime). The result should look like the first demonstration of agents in the video.
Edit: I really recommend checking out this video. It is very good!

I'm doing the same thing. To start, the scaleToRange01 function should probably return a float. As for the position, you might want to look at the C# side: how are you initializing the agents and getting that data into the buffer? You need to create a matching struct in C# and then assign it, something like below.
int totalSize = (sizeof(float) * 2) + (sizeof(float));
agentBuffer = new ComputeBuffer(agents.Length, totalSize);
agentBuffer.SetData(agents);
computeShader.SetBuffer(0, "agents", agentBuffer);

I am also attempting to recreate this. The problem is Sebastian leaves out his C# code and some of his HLSL, so it's hard to put together the pieces that aren't there. I worked nonstop all day yesterday and finally got it to perform demonstration 2. The most difficult thing for me was getting the threading right and having the GPU compute all the items I need it to. I am dreading starting the trail dissipation and trail sensing, but honestly, it felt great getting to demo 2 and that's what is pushing me to keep at it. Everything is very touchy with this project and it is not for the casual programmer. (Also, learn a bit about HLSL if you haven't.) Another thing: I don't use his random angle generator, I just created my own. I know this doesn't help much, but just know other people are struggling through this too. Sebastian is a genius.

I found this question a long time after it was asked, but the topic is still interesting, so maybe my answer will help passersby later.
BTW, look at this video.
Slime is a game life form now!
The problem with the code in the original question is ambiguity about what the kernel is supposed to process.
[numthreads(8,8,1)]
void CSUpdate(uint3 id : SV_DispatchThreadID)
{
    //if (id.x >= numAgents) { return; }
    Agent agent = agents[id.x];
    //etc...
}
In this kernel you are meant to process a 1D array of agents, so you need to dispatch it like this:
shader.Dispatch(kernelUpdateAgents,
    Mathf.CeilToInt(numAgents / (float) xKernelThreadGroupSize),
    1,
    1);
And of course you need to correct the kernel's thread group size:
[numthreads(8,1,1)]
void CSUpdate(uint3 id : SV_DispatchThreadID)
For a 2D texture you need to keep the kernel like this:
[numthreads(8,8,1)]
void ProcessTexture(uint3 id : SV_DispatchThreadID)
{
    //some work
}
And only then is it okay to dispatch it with a second dimension:
shader.Dispatch(kernelProcessTexture,
    Mathf.CeilToInt(TextureWidth / (float) xKernelThreadGroupSize),
    Mathf.CeilToInt(TextureHeight / (float) yKernelThreadGroupSize),
    1);
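On the C# side, the thread group sizes can be read back with ComputeShader.GetKernelThreadGroupSizes instead of hard-coding them. A minimal sketch, assuming the kernel names CSUpdate and ProcessTexture from above and that shader, numAgents and renderTexture are fields you already have:

// Rough sketch; kernel and field names are the ones used in this thread, not a fixed API.
void DispatchKernels()
{
    int kernelUpdateAgents = shader.FindKernel("CSUpdate");
    shader.GetKernelThreadGroupSizes(kernelUpdateAgents, out uint agentGroupX, out _, out _);
    // 1D dispatch: one thread per agent, only the X dimension matters.
    shader.Dispatch(kernelUpdateAgents, Mathf.CeilToInt(numAgents / (float)agentGroupX), 1, 1);

    int kernelProcessTexture = shader.FindKernel("ProcessTexture");
    shader.GetKernelThreadGroupSizes(kernelProcessTexture, out uint texGroupX, out uint texGroupY, out _);
    // 2D dispatch: one thread per pixel, cover the texture in X and Y.
    shader.Dispatch(kernelProcessTexture,
        Mathf.CeilToInt(renderTexture.width / (float)texGroupX),
        Mathf.CeilToInt(renderTexture.height / (float)texGroupY),
        1);
}

Reading the group sizes back keeps the C# dispatch in sync with the [numthreads] attributes in the compute shader.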
P.S. There is a link to the GitHub repo under the video.


Using functions from the C# script inside the shader

I'm trying to write a fragment shader that will give a different color depending on the position. For this purpose, I wrote a script that returns the color for a given Vector3, and I want to call this function inside a shader. Is it possible at all?
My code:
using System.Collections.Generic;
using UnityEngine;

public class CustomLight : MonoBehaviour
{
    public static List<CustomLight> lights = new List<CustomLight>();

    [Min(0)]
    public float intensity = 1;
    public Color color = Color.white;
    [Min(0)]
    public float radius = 4;
    [Range(0, 1)]
    public float innerRadius = 0;

    public Color GetLight(Vector3 point)
    {
        if (intensity <= 0 || radius <= 0) return Color.clear;
        float value = 0;
        float distanceSqr = (point - transform.position).sqrMagnitude;
        if (distanceSqr >= radius * radius) return Color.clear;
        if (innerRadius == 1) value = 1;
        else
        {
            if (distanceSqr <= radius * radius * innerRadius * innerRadius) value = 1;
            else value = Mathf.InverseLerp(radius, radius * innerRadius, Mathf.Sqrt(distanceSqr));
        }
        return color * intensity * value;
    }

    private void OnEnable()
    {
        if (!lights.Contains(this)) lights.Add(this);
    }

    private void OnDisable()
    {
        lights.Remove(this);
    }
}
I haven't written any shader yet, because I don't even know where to start. I need the sum of the results from all such scripts in the scene, then to multiply it by the color of the shader.
I apologize for my poor English.
C# functions run on the CPU while shaders run on the GPU, so you can't call C# functions from a shader.
You can, however, access variables passed to the shader through its material via the Material.SetX methods, which is likely the closest to what you're trying to achieve.
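To make that concrete, a driver script could push the data from every CustomLight into material properties each frame. This is only a rough sketch: the property names _LightCount, _LightPositions and _LightColors are made up here and would have to match whatever uniforms your shader declares:

using UnityEngine;

public class CustomLightUploader : MonoBehaviour
{
    public Material targetMaterial;   // the material using your custom shader
    const int MaxLights = 16;         // shader arrays need a fixed maximum size

    void Update()
    {
        var positions = new Vector4[MaxLights];
        var colors = new Color[MaxLights];
        int count = Mathf.Min(CustomLight.lights.Count, MaxLights);

        for (int i = 0; i < count; i++)
        {
            CustomLight light = CustomLight.lights[i];
            // xyz = position, w = radius, so the shader can compute the falloff itself
            Vector3 p = light.transform.position;
            positions[i] = new Vector4(p.x, p.y, p.z, light.radius);
            colors[i] = light.color * light.intensity;
        }

        targetMaterial.SetInt("_LightCount", count);
        targetMaterial.SetVectorArray("_LightPositions", positions);
        targetMaterial.SetColorArray("_LightColors", colors);
    }
}

The fragment shader would then loop over the first _LightCount entries, reproduce the falloff from GetLight, sum the contributions and multiply the result by the surface color.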

Strange outputs for a moving platform in Unity

First off, sorry if this isn't written very well; I've spent hours debugging this and I'm very stressed. I'm trying to make a moving platform in Unity that can move between waypoints. I don't want tons of GameObjects in the world taking up valuable processing power, so I'm trying to use something I can just add to the script through the editor.
The only problem is that it seems to be doing this at an incredible speed:
Black = the camera view, blue = the platform and where it should be going based on the waypoints, red = what it is currently doing.
I've spent hours trying to find a fix but I have no idea why it's doing this.
My Script on the Platform:
public Vector3[] localWaypoints;
Vector3[] globalWaypoints;

public float speed;
public bool cyclic;
public float waitTime;
[Range(0, 2)]
public float easeAmount;

int fromWaypointIndex;
float percentBetweenWaypoints;
float nextMoveTime;

void Start()
{
    globalWaypoints = new Vector3[localWaypoints.Length];
    for (int i = 0; i < localWaypoints.Length; i++)
    {
        globalWaypoints[i] = localWaypoints[i] + transform.position;
    }
}

void Update()
{
    Vector3 velocity = CalculatePlatformMovement();
    transform.Translate(velocity);
}

float Ease(float x)
{
    float a = easeAmount + 1;
    return Mathf.Pow(x, a) / (Mathf.Pow(x, a) + Mathf.Pow(1 - x, a));
}

Vector3 CalculatePlatformMovement()
{
    if (Time.time < nextMoveTime)
    {
        return Vector3.zero;
    }

    fromWaypointIndex %= globalWaypoints.Length;
    int toWaypointIndex = (fromWaypointIndex + 1) % globalWaypoints.Length;
    float distanceBetweenWaypoints = Vector3.Distance(globalWaypoints[fromWaypointIndex], globalWaypoints[toWaypointIndex]);
    percentBetweenWaypoints += Time.deltaTime * speed / distanceBetweenWaypoints;
    percentBetweenWaypoints = Mathf.Clamp01(percentBetweenWaypoints);
    float easedPercentBetweenWaypoints = Ease(percentBetweenWaypoints);

    Vector3 newPos = Vector3.Lerp(globalWaypoints[fromWaypointIndex], globalWaypoints[toWaypointIndex], easedPercentBetweenWaypoints);

    if (percentBetweenWaypoints >= 1)
    {
        percentBetweenWaypoints = 0;
        fromWaypointIndex++;
        if (!cyclic)
        {
            if (fromWaypointIndex >= globalWaypoints.Length - 1)
            {
                fromWaypointIndex = 0;
                System.Array.Reverse(globalWaypoints);
            }
        }
        nextMoveTime = Time.time + waitTime;
    }

    return newPos - transform.position;
}

struct PassengerMovement
{
    public Transform transform;
    public Vector3 velocity;
    public bool standingOnPlatform;
    public bool moveBeforePlatform;

    public PassengerMovement(Transform _transform, Vector3 _velocity, bool _standingOnPlatform, bool _moveBeforePlatform)
    {
        transform = _transform;
        velocity = _velocity;
        standingOnPlatform = _standingOnPlatform;
        moveBeforePlatform = _moveBeforePlatform;
    }
}

void OnDrawGizmos()
{
    if (localWaypoints != null)
    {
        Gizmos.color = Color.red;
        float size = .3f;

        for (int i = 0; i < localWaypoints.Length; i++)
        {
            Vector3 globalWaypointPos = (Application.isPlaying) ? globalWaypoints[i] : localWaypoints[i] + transform.position;
            Gizmos.DrawLine(globalWaypointPos - Vector3.up * size, globalWaypointPos + Vector3.up * size);
            Gizmos.DrawLine(globalWaypointPos - Vector3.left * size, globalWaypointPos + Vector3.left * size);
        }
    }
}
UPDATE: Upon further testing I found that if the first object in my localWaypoints array is set to 0,0,0 and my 2nd object is set to 1,0,0, then the platform will spiral to the right, making sure to hit the waypoints as it's spiraling, and then spiral out into nowhere like in the image above. But if I set my first object to 0,0,0 and my second object to -1,0,0, then the object will act the same way as before, but will spiral to the left as displayed in this image. (The second image has also been updated to show how the platform makes sure to hit both waypoints before it spirals out into nowhere.)
I've also noticed that if I set both waypoints to 0,0,0 then the platform stays still. These two things prove that it has something to do with the way the waypoints are being handled and not some other script or parent object interfering.
Using the updated numbers ([0,0,0], [1,0,0]) works in my test app. However, if I put a rotation on the object's Y axis, then I see behavior like yours. In Update, if you change:
transform.Translate(velocity);
to
transform.Translate(velocity, Space.World);
you should see your desired behavior. Note that "transform.Translate(velocity)" is the same as "transform.Translate(velocity, Space.Self)". Your translation is being rotated.
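To make the difference concrete, translating in world space is equivalent to adding the offset straight to the position, so the platform's rotation never affects the movement direction:

void Update()
{
    Vector3 velocity = CalculatePlatformMovement();
    // Same effect as transform.Translate(velocity, Space.World):
    // the offset is applied along world axes, ignoring the platform's rotation.
    transform.position += velocity;
}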
If you are curious, take a look at this for more information on how the values in the transform are applied:
https://gamedev.stackexchange.com/questions/138358/what-is-the-transformation-order-when-using-the-transform-class

How to fix my 3D Ray-Casting Algorithm for getting the Block the player is looking at

My algorithm for calculating which block a player is looking at (voxel-based world) is not working correctly. I adapted it from this tutorial, from 2D to 3D. At times it shows the correct block, but other times it either returns nothing when it should or returns something in a completely different direction. Why is this happening?
public (Block, Box?) GetLookAtBlock(Vector3 pos, Vector3 look) {
    try {
        look = look.Normalized() * 4;
        float deltaX = Math.Abs(look.Normalized().X);
        float deltaY = Math.Abs(look.Normalized().Y);
        float deltaZ = Math.Abs(look.Normalized().Z);

        int stepX, stepY, stepZ;
        float distX, distY, distZ;

        if (look.X < 0) {
            distX = (pos.X - SandboxMath.RoundDown(pos.X)) * deltaX;
            stepX = -1;
        } else {
            distX = (SandboxMath.RoundDown(pos.X) + 1 - pos.X) * deltaX;
            stepX = 1;
        }
        if (look.Y < 0) {
            distY = (pos.Y - SandboxMath.RoundDown(pos.Y)) * deltaY;
            stepY = -1;
        } else {
            distY = (SandboxMath.RoundDown(pos.Y) + 1 - pos.Y) * deltaY;
            stepY = 1;
        }
        if (look.Z < 0) {
            distZ = (pos.Z - SandboxMath.RoundDown(pos.Z)) * deltaZ;
            stepZ = -1;
        } else {
            distZ = (SandboxMath.RoundDown(pos.Z) + 1 - pos.Z) * deltaZ;
            stepZ = 1;
        }

        int endX = SandboxMath.RoundDown(pos.X + look.X);
        int endY = SandboxMath.RoundDown(pos.Y + look.Y);
        int endZ = SandboxMath.RoundDown(pos.Z + look.Z);

        int x = (int)pos.X;
        int y = (int)pos.Y;
        int z = (int)pos.Z;

        Block start = GetBlock(x, y, z);
        if (start != 0) {
            return (start, new Box(new Vector3(x, y, z), new Vector3(x + 1, y + 1, z + 1)));
        }

        while (x != endX && y != endY && z != endZ) {
            if (distX < distY) {
                if (distX < distZ) {
                    distX += deltaX;
                    x += stepX;
                } else {
                    distZ += deltaZ;
                    z += stepZ;
                }
            } else {
                if (distY < distZ) {
                    distY += deltaY;
                    y += stepY;
                } else {
                    distZ += deltaZ;
                    z += stepZ;
                }
            }

            Block b = GetBlock(x, y, z);
            if (b != 0) {
                return (b, new Box(new Vector3(x, y, z), new Vector3(x + 1, y + 1, z + 1)));
            }
        }

        return (0, null);
    } catch (IndexOutOfRangeException) {
        return (0, null);
    }
}
Your DDA has two issues I can see at first glance:
It works only if Z is the major axis, so only if you are in camera space or have a fixed camera looking in the Z direction.
Your deltas are weird.
Why:
delta? = abs(look.Normalized().?);
I would expect:
delta? = abs(1 / look.Normalized().?);
I do not code in C#, so I am not confident enough to repair your code; however, here is my C++ template for an n-dimensional DDA, so just compare and repair yours accordingly ...
template<const int n> class DDA
{
public:
    int p0[n], p1[n], p[n];
    int d[n], s[n], c[n], ix;

    DDA() {};
    DDA(DDA& a) { *this = a; }
    ~DDA() {};
    DDA* operator = (const DDA *a) { *this = *a; return this; }
    //DDA* operator = (const DDA &a) { ..copy... return this; }

    void start()
    {
        int i;
        for (ix = 0, i = 0; i < n; i++)
        {
            p[i] = p0[i]; s[i] = 0; d[i] = p1[i] - p0[i];
            if (d[i] > 0) s[i] = +1;
            if (d[i] < 0) { s[i] = -1; d[i] = -d[i]; }
            if (d[ix] < d[i]) ix = i;
        }
        for (i = 0; i < n; i++) c[i] = d[ix];
    }

    void start(double *fp0) // this will add the subpixel offset according to first point as double
    {
        int i; start();
        for (i = 0; i < n; i++)
        {
            if (s[i] < 0) c[i] = double(double(d[ix]) * (      fp0[i] - floor(fp0[i])));
            if (s[i] > 0) c[i] = double(double(d[ix]) * (1.0 - fp0[i] + floor(fp0[i])));
        }
    }

    bool update()
    {
        int i;
        for (i = 0; i < n; i++) { c[i] -= d[i]; if (c[i] <= 0) { c[i] += d[ix]; p[i] += s[i]; } }
        return (p[ix] != p1[ix] + s[ix]);
    }
};
start() initializes the variables and position for the DDA (from the p0, p1 control points), and update() is just a single step of the DDA ... The resulting iterated point is in p.
s is the step, d is the delta, c is the counter, and ix is the major-axis index.
The usage is like this:
DDA<3> A;       // 3D
A.p0[...]=...;  // set start point
A.p1[...]=...;  // set end point
for (A.start(); A.update();)
{
    A.p[...];   // here use the iterated point
}
DDA walkthrough
Well, the DDA is just interpolation (rasterization) of integer positions on a line between two endpoints (p0, p1). The line equation is:
p(t) = p0 + t*(p1-p0);
t = <0.0, 1.0>
However, that involves floating-point math and we want integers, so we can rewrite it to something like this:
dp = p1 - p0
D = max(|dp.x|, |dp.y|, |dp.z|, ...)
p.x(i) = p0.x + (dp.x*i)/D
p.y(i) = p0.y + (dp.y*i)/D
p.z(i) = p0.z + (dp.z*i)/D
...
i = { 0, 1, ..., D }
where i, D match the major axis (the one with the biggest change). If you look closer, we are using *i/D, which is a slow operation, and we usually want speed, so we can rewrite the term (exploiting the fact that i goes from 0 to D in order) into something like this (for the x axis only; the others will be the same with different indexes ...):
p.x = p0.x;                  // start position
s.x = 0; d.x = p1.x - p0.x;  // step and abs delta
if (d.x > 0) s.x = +1;
if (d.x < 0) { s.x = -1; d.x = -d.x; }
D = max(d.x, d.y, d.z, ...); // major axis abs delta
c.x = D;                     // counter for the iteration
for (i = 0; i < D; i++)
{
    c.x -= d.x;              // update counter with axis abs delta
    if (c.x <= 0)            // counter overflowed?
    {
        c.x += D;            // update counter with major axis abs delta
        p.x += s.x;          // update axis by step
    }
}
Now take a closer look at the counter c: we add D to it and subtract d.x from it, which is the i/D rewritten into D iterations. All the other axes are computed in the same manner; you just need a counter c, step s, and absolute delta d for each axis ...
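Since the question is in C#, here is a rough C# port of the same counter-based integer DDA (my own sketch following the logic above, not tested against the asker's project); it walks cell by cell from p0 to p1:

// Requires: using System; using System.Collections.Generic;
static IEnumerable<(int x, int y, int z)> WalkLine((int x, int y, int z) p0, (int x, int y, int z) p1)
{
    int[] p = { p0.x, p0.y, p0.z };
    int[] d = { p1.x - p0.x, p1.y - p0.y, p1.z - p0.z };
    int[] s = new int[3];
    for (int i = 0; i < 3; i++) { s[i] = Math.Sign(d[i]); d[i] = Math.Abs(d[i]); }

    int D = Math.Max(d[0], Math.Max(d[1], d[2])); // major-axis abs delta
    int[] c = { D, D, D };                        // per-axis counters

    yield return (p[0], p[1], p[2]);              // start cell
    for (int step = 0; step < D; step++)
    {
        for (int i = 0; i < 3; i++)
        {
            c[i] -= d[i];                                // update counter with axis abs delta
            if (c[i] <= 0) { c[i] += D; p[i] += s[i]; }  // counter overflowed: step this axis
        }
        yield return (p[0], p[1], p[2]);
    }
}

The caller would test GetBlock at each returned cell, exactly as in the original loop.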
BTW, if it helps, look at this:
volumetric GLSL back ray tracer
which is (I assume) what you are doing, but implemented in a GLSL shader (see the fragment code). However, it does not use a DDA; instead it adds a unit direction vector to the initial position until it hits something or reaches the end of the voxel space ...
BTW it is based on:
Ray Casting with different height size
just like the link of yours.
[Edit] Wrong hits (guessed from your comments)
That most likely has nothing to do with the DDA. It is more likely an edge case when you are standing directly on a cell crossing, or have a wrongly truncated position, or have wrongly z-sorted the hits. I remember I had trouble with it. I ended up with a very weird solution in GLSL; see the link above and the fragment code. Look for
// YZ plane voxels hits
which is directly after the non-"DDA" ray casting code. It detects which plane of the voxel will be hit; I think you should do something similar. It was weird, as in the 2D DOOM example (also in the links above) I had no such problems... but that was due to the different math used there (suitable only for 2D).
The GLSL code just before the casting of the ray changes the position a bit to avoid edge cases. Pay attention to the floor and ceil, but mine works on floats, so it would need some tweaking for integer math. Luckily I was repairing my other ray casting engine based on this:
Comanche Voxel space ray casting
And the solution is to offset the DDA by the subpixel start position of the ray. I updated the DDA code above; the new usage is:
DDA<3> A;       // 3D
A.p0[...]=...;  // set start point
A.p1[...]=...;  // set end point
for (A.start(start_point_as_double[3]); A.update();)
{
    A.p[...];   // here use the iterated point
}
Also, on second thought, make sure that in your DDA the c, d, s values are integers; if they are floating-point instead, that might cause the problems you are describing too...

Unity, get the "actual" current Terrain?

Unity has a function Terrain.SampleHeight(point), which is great: it instantly gives you the height of the terrain underfoot rather than having to cast.
However, any non-trivial project has more than one Terrain. (Indeed, any physically large scene inevitably features terrain stitching, one way or another.)
Unity has a function Terrain.activeTerrain which - I'm not making this up - gives you: the "first one loaded".
Obviously that is completely useless.
In fact, is there a fast way to get the Terrain "under you"? You could then use the fast function .SampleHeight.
{Please note, of course, you could ... cast to find a Terrain under you! But you would then have your altitude, so there's no need to worry about .SampleHeight!}
In short, is there a matching function to use with SampleHeight which lets that function know which Terrain to use for a given xyz?
(Or indeed, is SampleHeight just a fairly useless demo function, usable only in demos with one Terrain?)
Is there in fact a fast way to get the Terrain "under you" - so as to then use the fast function .SampleHeight?
Yes, it can be done.
(Or indeed, is SampleHeight just a fairly useless demo function, usable only in demos with one Terrain?)
No.
There is Terrain.activeTerrain, which returns the main terrain in the scene. There is also Terrain.activeTerrains (notice the "s" at the end), which returns the active terrains in the scene.
Obtain the terrains with Terrain.activeTerrains, which returns a Terrain array, then use the Terrain.GetPosition function to obtain each terrain's position. Get the current terrain by finding the closest terrain to the player's position. You can do this by comparing the terrain positions using Vector3.Distance or Vector3.sqrMagnitude (faster).
Terrain GetClosestCurrentTerrain(Vector3 playerPos)
{
    //Get all terrain
    Terrain[] terrains = Terrain.activeTerrains;

    //Make sure that terrains length is ok
    if (terrains.Length == 0)
        return null;

    //If just one, return that one terrain
    if (terrains.Length == 1)
        return terrains[0];

    //Get the closest one to the player
    float lowDist = (terrains[0].GetPosition() - playerPos).sqrMagnitude;
    var terrainIndex = 0;

    for (int i = 1; i < terrains.Length; i++)
    {
        Terrain terrain = terrains[i];
        Vector3 terrainPos = terrain.GetPosition();

        //Find the distance and check if it is lower than the last one then store it
        var dist = (terrainPos - playerPos).sqrMagnitude;
        if (dist < lowDist)
        {
            lowDist = dist;
            terrainIndex = i;
        }
    }
    return terrains[terrainIndex];
}
USAGE:
Assuming that the player's position is transform.position:
//Get the current terrain
Terrain terrain = GetClosestCurrentTerrain(transform.position);
Vector3 point = new Vector3(0, 0, 0);
//Can now use SampleHeight
float yHeight = terrain.SampleHeight(point);
While it's possible to do it with Terrain.SampleHeight, this can be simplified with a simple raycast from the player's position down to the Terrain.
Vector3 SampleHeightWithRaycast(Vector3 playerPos)
{
    float groundDistOffset = 2f;
    RaycastHit hit;

    //Raycast down to terrain
    if (Physics.Raycast(playerPos, -Vector3.up, out hit))
    {
        //Get y position
        playerPos.y = (hit.point + Vector3.up * groundDistOffset).y;
    }
    return playerPos;
}
Terrain.GetPosition() = Terrain.transform.position = the terrain's position in the world.
A working method:
Terrain[] _terrains = Terrain.activeTerrains;

int GetClosestCurrentTerrain(Vector3 playerPos)
{
    //Get the closest one to the player
    var center = new Vector3(_terrains[0].transform.position.x + _terrains[0].terrainData.size.x / 2, playerPos.y, _terrains[0].transform.position.z + _terrains[0].terrainData.size.z / 2);
    float lowDist = (center - playerPos).sqrMagnitude;
    var terrainIndex = 0;

    for (int i = 0; i < _terrains.Length; i++)
    {
        center = new Vector3(_terrains[i].transform.position.x + _terrains[i].terrainData.size.x / 2, playerPos.y, _terrains[i].transform.position.z + _terrains[i].terrainData.size.z / 2);

        //Find the distance and check if it is lower than the last one then store it
        var dist = (center - playerPos).sqrMagnitude;
        if (dist < lowDist)
        {
            lowDist = dist;
            terrainIndex = i;
        }
    }
    return terrainIndex;
}
It turns out the answer is simply NO, Unity does not provide such a function.
You can use this function to get the closest Terrain to your current position:
int GetClosestTerrain(Vector3 CheckPos)
{
    int terrainIndex = 0;
    float lowDist = float.MaxValue;

    for (int i = 0; i < _terrains.Length; i++)
    {
        var center = new Vector3(_terrains[i].transform.position.x + _terrains[i].terrainData.size.x / 2, CheckPos.y, _terrains[i].transform.position.z + _terrains[i].terrainData.size.z / 2);
        float dist = Vector3.Distance(center, CheckPos);
        if (dist < lowDist)
        {
            lowDist = dist;
            terrainIndex = i;
        }
    }
    return terrainIndex;
}
and then you can use the function like this:
private Terrain[] _terrains;

void Start()
{
    _terrains = Terrain.activeTerrains;
    Vector3 start_pos = Vector3.zero;
    start_pos.y = _terrains[GetClosestTerrain(start_pos)].SampleHeight(start_pos);
}
// Requires: using System.Linq;
public static Terrain GetClosestTerrain(Vector3 position)
{
    return Terrain.activeTerrains.OrderBy(x =>
    {
        var terrainPosition = x.transform.position;
        var terrainSize = x.terrainData.size * 0.5f;
        var terrainCenter = new Vector3(terrainPosition.x + terrainSize.x, position.y, terrainPosition.z + terrainSize.z);
        return Vector3.Distance(terrainCenter, position);
    }).First();
}
Raycast solution (this was not asked, but for those looking for a solution using a raycast):
Raycast down from the player and ignore everything that does not have the "Terrain" layer (the layer can easily be set in the Inspector).
Code:
void Update() {
    // Put this on Player! Raycasts down (raylength = 10f); if we hit something, check if the layer's name is "Terrain", and if yes, return its instanceID
    RaycastHit hit;
    if (Physics.Raycast(transform.localPosition, transform.TransformDirection(Vector3.down), out hit, 10f, 1 << LayerMask.NameToLayer("Terrain"))) {
        Debug.Log(hit.transform.gameObject.GetInstanceID());
    }
}
At this point you already have a reference to the Terrain via "hit.transform.gameObject".
For my case, I wanted to reference this terrain by its instance ID:
// any other script
public static UnityEngine.Object FindObjectFromInstanceID(int goID) {
    return (UnityEngine.Object)typeof(UnityEngine.Object)
        .GetMethod("FindObjectFromInstanceID", System.Reflection.BindingFlags.NonPublic | System.Reflection.BindingFlags.Static)
        .Invoke(null, new object[] { goID });
}
But as written above, if you want the Terrain itself (as Terrain object) and not the instanceID, then "hit.transform.gameObject" will give you the reference already.
Input and code snippets taken from these links:
https://answers.unity.com/questions/1164722/raycast-ignore-layers-except.html
https://answers.unity.com/questions/34929/how-to-find-object-using-instance-id-taken-from-ge.html

NaN values in C# side after CUDA computations

I created a C# program that uses managedCUDA to calculate the body interactions between lots of "planets" or "balls". I got CUDA to work properly with single float and int calculations as tests, but now, with arrays, it doesn't seem to work properly. I have the same struct defined both in my C# program and in the kernel:
struct Ball
{
    float2 position;
    float2 velocity;
    float mass;
};
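For reference, the C# mirror of that struct (not shown in the question) has to match this layout field for field; a sketch using managedCUDA's vector types might look like this:

using System.Runtime.InteropServices;
using ManagedCuda.VectorTypes; // float2 matching the CUDA vector type

// Hypothetical C# counterpart; field order and sizes must match the device
// struct exactly, otherwise the copied data is misinterpreted on either side.
[StructLayout(LayoutKind.Sequential)]
struct Ball
{
    public float2 position;
    public float2 velocity;
    public float mass;
}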
Here's the code I use for initializing the kernel in my C# program:
//initializes the CUDA context
cuda = new CudaContext();

//Loads the two kernels, velocity calculation and positions updating according to the velocity
UpdateBallGravity = cuda.LoadKernel("kernel.ptx", "UpdateBallGravity");
UpdateBallPosition = cuda.LoadKernel("kernel.ptx", "UpdateBallPosition");

//allocates gpu memory for a new Ball[] and copies it
d_balls = new Ball[1024];

//generates new balls on the gpu memory
Random random = new Random();
for (int i = 0; i < d_balls.Size; i++)
{
    d_balls[i] = new Ball(
        (float)random.NextDouble() * ClientSize.X,
        (float)random.NextDouble() * ClientSize.Y,
        (float)random.NextDouble() * 20000);
}
When I'm about to render, I put a breakpoint to check the values coming from the GPU and find that, after updating the balls' velocities and positions, I get NaN in both the position and velocity members of each ball. The mass doesn't change, since I didn't modify it in the kernel. Here are both kernels:
__global__ void UpdateBallGravity(Ball *balls, int ballCount, float gravityInfluence)
{
    int idx = getGlobalIdx_3D_3D();
    if (idx >= ballCount)
        return;

    float2 gravity = float2();
    for (int i = 0; i < ballCount; i++)
    {
        if (i == idx)
            continue;

        Ball remote = balls[i];
        float2 difference = make_float2(remote.position.x - balls[idx].position.x, remote.position.y - balls[idx].position.y);
        float f = (balls[idx].mass + remote.mass) / lengthSquared2f(difference);
        gravity.y += difference.y*f;
    }

    balls[idx].velocity.x += gravity.x*gravityInfluence;
    balls[idx].velocity.y += gravity.y*gravityInfluence;
}

__global__ void UpdateBallPosition(Ball *balls, int ballCount)
{
    int idx = getGlobalIdx_3D_3D();
    if (idx >= ballCount)
        return;

    balls[idx].position.x += balls[idx].velocity.x;
    balls[idx].position.y += balls[idx].velocity.y;
}
