Translate Unity units of measurement? - c#

In Unity one can use raycasting to calculate various measurements, such as diameter, wall thickness, and width. One way to do this is to capture a user's mouse click on an object, use a RaycastHit to capture the location of the click on the object, and then cast additional rays depending on the measurement desired.
Seen below:
Thickness of the walls clicked is .0098, .0096, and .0072. Width is .0615, .0611, and .060. Diameter is .0475.
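For context, here is a minimal sketch (my own illustration, not the asker's actual code) of how such a thickness measurement might be set up: cast a ray through the mouse click, then cast a second ray back toward the entry point to find the opposite face.

using UnityEngine;

// Hypothetical sketch: measure wall thickness at a mouse click by casting
// a second ray from just past the entry point, back along the view direction.
public class ThicknessProbe : MonoBehaviour
{
    // Assumed upper bound on wall thickness; it must be smaller than the
    // gap to the next surface, or the back ray will hit the wrong face.
    const float maxThickness = 0.02f;

    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
        if (!Physics.Raycast(ray, out RaycastHit entry)) return;

        // Start just past the entry point and cast back toward it; the
        // first surface hit is the far side of the clicked wall.
        Vector3 probe = entry.point + ray.direction * maxThickness;
        if (Physics.Raycast(probe, -ray.direction, out RaycastHit exit, maxThickness))
        {
            float thickness = Vector3.Distance(entry.point, exit.point);
            Debug.Log("Thickness: " + thickness);
        }
    }
}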
Though these measurements are (believed to be) executed and calculated correctly, it's unclear how the results translate to real-world units of measurement.
This is best demonstrated in the fourth image. Checking the same diameter in another CAD program, NX, gives 0.4210 inches. Thickness and width were measured there as well, at .075244" and .252872" respectively.
So then, how do the results in Unity (produced using Vector3.Distance to calculate the distance between two points) translate to real-world units of measurement?
Googling the subject yields a common answer: Unity's measurements are "game units" and can be used however desired. While I grasp this, I don't understand how to translate "game units", or whatever Unity's units of measurement truly are, into the measurement results I see in CAD programs.
Results (CAD vs. Unity):
Thickness: .075244" vs. .0098, .0096, and .0072
Width:     .252872" vs. .0615, .0611, and .060
Diameter:  0.4210"  vs. .0475
(note1: model scales are identical in Unity and external CAD program.)
(note2: the slight variation in the thickness and width results comes from the Unity measurements being taken at slight angles, whereas the CAD program measures the perpendicular distance between the two planes, i.e. .009x and .06x.)
(note3: ignore the labeling of Width in the second visual as 'Thickness', and the inch symbol (") in all of the Unity visuals, as both are incorrect.)

1 Unity unit is generally held to be 1 meter; however, as you've read, it's up to your implementation. In this case it looks like you're actually exporting from CAD with 1 inch = 1 unit, since your results seem similar but slightly off.
The reason you're getting inaccuracies is most likely that Unity's collision system is not especially accurate: most colliders are in fact slightly larger than the mesh they represent, which will throw off your fine-tuned measurements significantly. On top of that, Unity has much lower precision than CAD. Since Unity is a game engine and needs to perform in real time, 3D position data is not very accurate (it gets pretty hazy around four digits of precision), and it gets significantly worse as you travel away from the origin.
I wouldn't recommend trying to use Unity for any kind of precise design work, especially when representing the real world. If you're dead set on it, you might want to scale your objects up by a factor of 10 or 100 in order to keep your digits closer to the decimal point and reduce floating-point error; this is obviously a hack.
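If you do need real-world numbers out of Unity, one pragmatic approach (my own suggestion, not something Unity provides) is to calibrate a conversion factor from a single dimension whose real size you already know from CAD, then apply it to every other measurement:

using UnityEngine;

// Hypothetical calibration helper: derive units-per-inch from one known
// feature, then convert any Vector3.Distance result to inches.
public static class UnitCalibration
{
    // e.g. knownUnityDistance = 0.0475f (Unity), knownRealInches = 0.4210f (NX)
    public static float UnitsPerInch(float knownUnityDistance, float knownRealInches)
    {
        return knownUnityDistance / knownRealInches;
    }

    public static float ToInches(float unityDistance, float unitsPerInch)
    {
        return unityDistance / unitsPerInch;
    }
}

Note this only works if the model scale really is uniform between Unity and CAD; if the calibrated factor doesn't reproduce your other CAD dimensions, the discrepancy is coming from the measurements themselves (collider inflation, angled rays) rather than from the unit conversion.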
You may want to also look at your physics settings: https://docs.unity3d.com/Manual/class-PhysicsManager.html
In particular "Default Contact Offset" may be relevant (although I'm not sure if it affects raycasts)
PS: I'd post this as a comment, but the rep system won't let me. Your description of the measurements in each environment is really confusing; next time, maybe try to format it as a table?

Related

How to create Unity3d Gaussian plume resembling breathing puffs

I'm working on a project that needs a breath effect that emits puffs of droplets for visualization. The puffs need to occur about 10-20 times per minute and look like water vapor expelled during cold weather. I've created several particle generators in Unity that use a cone-shaped emitter and tried to adjust them to get something similar to a Gaussian plume, but all I get are "rings", and I can't get one generator to create short "puffs". I finally created 3 generators within the same "cone": one emits small particles, one medium-sized, and the other large. But the result does not resemble a collection of particles that change size after being breathed out, i.e. shrinking due to evaporation, slowing down to terminal velocities appropriate to their changing size, drifting upward due to the thermal gradient in the room, etc. Can someone point me to documentation that explains how to create a particle generator that provides spatially distributed, velocity-distributed, size-distributed particles, where the "puff" can be characterized by vz_avg, vz_sigma, vx_avg = vy_avg, vx_sigma = vy_sigma, and where each particle's speed and acceleration can be a function of its size, the temperature difference between it and the background, evaporation due to humidity and temperature, etc.?
The Unity engine is good at letting one put an avatar into a scene and move and control it with almost realistic-looking behavior, BUT my difficulty is combining visual effects in a manner that is physically realistic, i.e. gravity, buoyancy, evaporation, slowing down, etc.
Pointers appreciated.
I managed to get a series of parameters put together that allowed me to perform this task. I had to use three different emitters: one for large droplets, one for mid-range droplets, and one for aerosols. Each used a different setting for gravity to emulate falling, hovering, and rising particles, respectively. By matching the emission rate of the "puffs" to human breathing, and the number of particles to the approximate number of viral droplets of the different sizes that have been determined to be in a breath, I was able to create a realistic-looking cloud of particles that are emitted, drift to the floor, hover, and also rise towards the ceiling.
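A minimal sketch of that three-emitter setup, assuming Unity's standard ParticleSystem API; all sizes, counts, and gravity values below are illustrative guesses, not the poster's actual parameters:

using UnityEngine;

// Hypothetical sketch: three emitters with different gravity modifiers to
// emulate falling droplets, hovering mid-range droplets, and rising aerosols.
public class BreathPuffs : MonoBehaviour
{
    public ParticleSystem largeDroplets;
    public ParticleSystem midDroplets;
    public ParticleSystem aerosols;

    void Start()
    {
        Configure(largeDroplets, size: 0.02f,  gravity:  0.3f,  count: 20);   // fall
        Configure(midDroplets,   size: 0.008f, gravity:  0f,    count: 60);   // hover
        Configure(aerosols,      size: 0.003f, gravity: -0.05f, count: 200);  // rise
    }

    static void Configure(ParticleSystem ps, float size, float gravity, short count)
    {
        var main = ps.main;
        main.startSize = size;
        main.gravityModifier = gravity;   // sign controls falling vs. buoyant rising
        main.startSpeed = 1.5f;           // initial exhale velocity
        main.startLifetime = 8f;

        var emission = ps.emission;
        emission.rateOverTime = 0f;
        // One short burst every 4 s ~= 15 breaths per minute; cycle count 0 repeats forever.
        emission.SetBursts(new[] { new ParticleSystem.Burst(0f, count, count, 0, 4f) });
    }
}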
I just thought I'd post an answer so that others might know it can be done, but it takes a lot of working backwards from the actual physics to the way that Unity displays things to get something realistic.

Simulate depressurization in a discrete room

I am trying to build a top-down-view spaceship game with destructible parts. I need to simulate the process of depressurization in case of a hull breach.
I have a tiled map which has the room partitioning code setup:
What I am trying to do is build some kind of vector field that determines the ways air leaves a depressurized room. So if you were to break the tile connecting the vacuum and the room (adjacent to both the purple and green rooms), you'd end up with a vector map like this:
My idea is to implement some kind of scalar field (similar to a potential field) to help determine the airflow: fill the grid with Euclidean distances (taking obstacles into account) to a known zero-potential point, then calculate the vectors by taking into account all of the adjacent tiles with a lower potential value than the current tile:
However, this method has a flaw: the amount of force applied to a body at a certain point doesn't take airflow bottlenecks or distance into account, so the force would be the same in the tile next to the vacuum tile as on the opposite end of the room.
Is there a better way to simulate such behavior, or a change to the algorithm I thought of that would take distance and bottlenecks into account more or less realistically?
Algorithm upgrade ideas collected from comments:
(...) if you want a realistic feeling of the "force" in this context, then it should be based not just on the distance but rather, like you said, on the airflow. You'd need to estimate it to some degree, and note that it behaves similarly to Kirchhoff's current rule in electronics. Let's say the hole is small; then the amount of air sucked out per second is small. The first nearest tile(s) must cover it; they lose X air per second. Their surrounding tiles must also cover it, losing X air per second in total. And their neighbours, and so on. It works like a Dijkstra distance, but counting down.
Example: assuming no walls, start with 16/sec at point zero, directed into the hole in the ground; the surrounding 8 tiles will each get 2/sec directed at the point-zero tile. The next layer of 12 surrounding tiles will get something like 1.33/sec each, and so on. Now alter that to, e.g., (1) account for various initial hole sizes, (2) handle large no-pass-through obstacles, and (3) limit air flow through small passages, which behave like new start points.
Another example (from the map in question): the tile that has a value of zero would have a flow of, say, 1000 units/s. The two below it would be 500/s each, the next one would be 1000/s as well, and the three connected to it would have 333/s each.
After that, we could base the coefficient for the vector on the difference of this scalar value, and since it takes obstacles and distance into account, it would work more or less realistically.
Regarding point (3) above, imagine that instead of having only sure-100%-pass and nope-0%-wall, you also have intermediate options. Instead of just a corridor and a wall you can also have, e.g., a broken window with a 30% air-pass modifier. For example, at the place on the map with distance [0] you've got the initial hole that generates a flux of 1000/sec. However, at distance [2] there is a small air vent or a broken window with a 30% airflow modifier. It will limit the incoming amount (2x500=1000) to 0.3x(2x500)=300/sec, which then flows on to the next areas. That allows you to depressurize compartments at different speeds, so the first few tiles lose all their air quickly while the rest of the deck takes some more time (unless the 30%-modifier window at point [2] breaks completely, etc.).
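Collecting those ideas into code, here is a hedged sketch (my own construction from the comments above, not a tested implementation; the grid layout, names, and 4-neighbour split are assumptions, and each tile is fed only by the first wave that reaches it, which is a simplification):

using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of the "counting down" flux idea: the breach emits a
// fixed outflow, and each wave of neighbouring tiles must jointly supply
// the previous wave's flux (Kirchhoff-style conservation).
public static class AirflowField
{
    static readonly Vector2Int[] Dirs =
        { Vector2Int.up, Vector2Int.down, Vector2Int.left, Vector2Int.right };

    // passability: 1 = open corridor, 0 = wall, 0.3 = broken window, etc.
    public static float[,] Propagate(float[,] passability, Vector2Int breach, float breachFlux)
    {
        int w = passability.GetLength(0), h = passability.GetLength(1);
        var flux = new float[w, h];
        flux[breach.x, breach.y] = breachFlux;

        var visited = new HashSet<Vector2Int> { breach };
        var frontier = new Queue<Vector2Int>();
        frontier.Enqueue(breach);

        while (frontier.Count > 0)
        {
            var tile = frontier.Dequeue();

            // Unvisited passable neighbours share this tile's outflow.
            var feeders = new List<Vector2Int>();
            foreach (var d in Dirs)
            {
                var n = tile + d;
                if (n.x >= 0 && n.y >= 0 && n.x < w && n.y < h
                    && passability[n.x, n.y] > 0f && !visited.Contains(n))
                    feeders.Add(n);
            }
            if (feeders.Count == 0) continue;

            float share = flux[tile.x, tile.y] / feeders.Count;
            foreach (var n in feeders)
            {
                // A partial opening (e.g. the 30% air vent) caps the onward flow.
                flux[n.x, n.y] = share * passability[n.x, n.y];
                visited.Add(n);
                frontier.Enqueue(n);
            }
        }
        return flux;
    }

    // The drag vector on a tile points toward the neighbour with the
    // highest flux (i.e. toward the breach), scaled by the difference.
    public static Vector2 FlowAt(float[,] flux, Vector2Int t)
    {
        var best = Vector2.zero; float bestDiff = 0f;
        foreach (var d in Dirs)
        {
            var n = t + d;
            if (n.x < 0 || n.y < 0 || n.x >= flux.GetLength(0) || n.y >= flux.GetLength(1)) continue;
            float diff = flux[n.x, n.y] - flux[t.x, t.y];
            if (diff > bestDiff) { bestDiff = diff; best = new Vector2(d.x, d.y); }
        }
        return best * bestDiff;
    }
}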

How can you stitch multiple heightmaps together to remove seams?

I am trying to write an algorithm (in C#) that will stitch two or more unrelated heightmaps together so there is no visible seam between the maps. Basically, I want to mimic the functionality found on this page:
http://www.bundysoft.com/wiki/doku.php?id=tutorials:l3dt:stitching_heightmaps
(You can just look at the pictures to get the gist of what I'm talking about)
I also want to be able to take a single heightmap and alter it so it can be tiled, in order to create an endless world (All of this is for use in Unity3d). However, if I can stitch multiple heightmaps together, I should be able to easily modify the algorithm to act on a single heightmap, so I am not worried about this part.
Any kind of guidance would be appreciated, as I have searched and searched for a solution without success. Just a simple nudge in the right direction would be greatly appreciated! I understand that many image manipulation techniques can be applied to heightmaps, but I have been unable to find an image processing algorithm that produces the results I'm looking for. For instance, image stitching appears to only work for images that have overlapping fields of view, which is not the case with unrelated heightmaps.
Would utilizing a FFT low pass filter in some way work, or would that only be useful in generating a single tileable heightmap?
Because the algorithm is to be used in Unity3d, any C# code will have to be confined to .NET 3.5, as I believe that's the latest version Unity uses.
Thanks for any help!
Okay, it seems I was on the right track with my previous attempts at solving this problem. My initial attempt at stitching the heightmaps together involved the following steps for each point on the heightmap:
1) Find the average between a point on the heightmap and its opposite point. The opposite point is simply the first point reflected across either the x axis (if stitching horizontal edges) or the z axis (for the vertical edges).
2) Find the new height for the point using the following formula:
newHeight = oldHeight + (average - oldHeight)*((maxDistance-distance)/maxDistance);
Where distance is the distance from the point on the heightmap to the nearest horizontal or vertical edge (depending on which edge you want to stitch). Any point with a distance less than maxDistance (an adjustable value that affects how much of the terrain is altered) is adjusted based on this formula.
That was the old formula, and while it produced really nice results for most of the terrain, it was creating noticeable lines between the region of altered heightmap points and the region of unaltered points. I realized almost immediately that this was occurring because the slope of the altered region was too steep in comparison to the unaltered region, creating a noticeable contrast between the two. Unfortunately, I went about solving this issue the wrong way, looking for solutions on how to blur or smooth the contrasting regions together to remove the line.
After very little success with smoothing techniques, I decided to try and reduce the slope of the altered region, in the hope that it would better blend with the slope of the unaltered region. I am happy to report that this has improved my stitching algorithm greatly, removing 99% of the lines reported above.
The main culprit from the old formula was this part:
(maxDistance-distance)/maxDistance
which was producing a value between 0 and 1 linearly based on the distance of the point to the nearest edge. As the distance between the heightmap points and the edge increased, the heightmap points would utilize less and less of the average (as defined above) and shift more and more towards their original values. This linear interpolation was the cause of the too-steep slope, but luckily I found a built-in method in the Mathf class of Unity's API that provides smooth cubic interpolation: the SmoothStep method.
Using this method (a similar method can be found in the XNA framework, found here), the change in how much of the average is used in determining a heightmap value is steepest at middle distances, but that steepness tapers off as the distance approaches maxDistance, creating a gentler slope that better blends with the slope of the unaltered region. The new formula looks something like this:
//Using Mathf - Unity only?
float weight = Mathf.SmoothStep(1f, 0f, distance/maxDistance);
//Using XNA
float weight = MathHelper.SmoothStep(1f, 0f, distance/maxDistance);
//If you can't use either of the two methods above
float input = distance/maxDistance;
float weight = 1f - (3f*input*input - 2f*input*input*input);
//Then calculate the new height using this weight
newHeight = oldHeight + (average - oldHeight)*weight;
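To make the whole pass concrete, here is a hedged sketch of applying that weight along one edge of a single square heightmap (the "make it tileable" case); the array layout and the mirror-across-the-edge opposite point are my assumptions from the steps above:

using UnityEngine;

// Hypothetical sketch: blend points near the bottom edge toward the average
// of each point and its mirror on the opposite edge, weighted by SmoothStep.
// 'heights' is a square heightmap, heights[x, z]; a full tiling pass would
// also blend the opposite edge symmetrically.
static void StitchBottomEdge(float[,] heights, int maxDistance)
{
    int size = heights.GetLength(0);
    for (int x = 0; x < size; x++)
    {
        for (int z = 0; z < maxDistance; z++)           // z = distance to the edge
        {
            float oldHeight = heights[x, z];
            float opposite  = heights[x, size - 1 - z]; // reflected opposite point
            float average   = (oldHeight + opposite) * 0.5f;
            float weight    = Mathf.SmoothStep(1f, 0f, (float)z / maxDistance);
            heights[x, z]   = oldHeight + (average - oldHeight) * weight;
        }
    }
}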
There may be even better interpolation methods that produce better stitching. I will certainly update this question if I find such a method, so anyone else looking to do heightmap stitching can find the information they need. Kudos to rincewound for being on the right track with linear interpolation!
What is done in the images you posted looks a lot like simple linear interpolation to me.
So basically: You take two images (Left, Right) and define a stitching region. For linear interpolation you could take the leftmost pixel of the left image (in the stitching region) and the rightmost pixel of the right image (also in the stitching region). Then you fill the space in between with interpolated values.
Take this example - I'm using a single line here to show the idea:
Left = [11,11,11,10,10,10,10]
Right= [01,01,01,01,02,02,02]
Let's say our overlap is 4 pixels wide:
Left  = [11,11,11,10,10,10,10]
Right =          [01,01,01,01,02,02,02]
                  ^  ^  ^  ^   overlap/stitching region
The leftmost value of the left image's stitching region would be 10.
The rightmost value of the right image's stitching region would be 01.
Now we interpolate linearly from 10 down to 01 across the region; our new stitching region looks as follows:
stitch = [10, 07, 04, 01]
We end up with the following stitched line:
line = [11,11,11,10,07,04,01,02,02,02]
If you apply this to two complete images you should get a result similar to what you posted before.
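A hedged sketch of that per-line blend (names and array layout are my own; the overlap columns of the two maps are assumed to line up as in the example, and the code sticks to .NET 3.5 features per the question):

using System.Collections.Generic;

// Hypothetical sketch: stitch one scan line from each of two heightmaps by
// linearly interpolating across an overlap region of 'overlap' samples.
static float[] StitchLine(float[] left, float[] right, int overlap)
{
    // Anchor values: leftmost pixel of the left image's stitching region
    // and rightmost pixel of the right image's stitching region.
    float a = left[left.Length - overlap];   // 10 in the example
    float b = right[overlap - 1];            // 01 in the example

    var result = new List<float>();
    for (int i = 0; i < left.Length - overlap; i++)
        result.Add(left[i]);                 // unmodified left part

    for (int i = 0; i < overlap; i++)
    {
        float t = (float)i / (overlap - 1);  // 0 at 'a', 1 at 'b'
        result.Add(a + (b - a) * t);         // 10, 07, 04, 01 in the example
    }

    for (int i = overlap; i < right.Length; i++)
        result.Add(right[i]);                // unmodified right part
    return result.ToArray();
}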

Kinect - Difference between Depth and Joint Position.Z

It seems to me that both depth and position.z measure the distance between the body parts and the camera.
From what I see in examples and questions, the body parts of a tracked human can be coloured differently based on how far they are from the camera.
As for the skeleton, the position z is limited to the joints that are available through the SDK.
So, in conclusion, both provide the same function, but depth is more precise. Do I have the wrong concept of depth, or am I missing any important points?
*I apologize if this question can be easily found on stackoverflow or on other websites. I couldn't find any pages that could answer my query so I've decided to post here instead.
Depth is trivially calculated per pixel. Joint.Z is optionally calculated per joint. Joint calculation has a substantial performance cost because the SDK has to analyze the image to figure out which of those millions of pixels is, for example, your left knee. Joints have the benefit of also being inferred by the SDK based on its understanding of human anatomy, so if your left knee happens to be occluded by a wandering puppy, the joint position will still be pretty accurate, because assumptions are made based on other visible joints.
If you are already doing skeleton tracking for the x, y of joints, then you might as well take advantage of the z that comes with it, but otherwise depth will be more efficient.
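For illustration, a hedged sketch against the Kinect for Windows SDK v1.x (the coordinate-mapper call assumes SDK 1.6 or later; sensor setup, event wiring, and error handling are omitted):

using Microsoft.Kinect;

// Hypothetical sketch: compare the per-joint Z with the per-pixel depth at
// the same spot. Joint.Position.Z is in meters and can be inferred for
// occluded joints; DepthImagePoint.Depth is raw millimeters at one pixel.
static void CompareDepthAndJointZ(KinectSensor sensor, Skeleton skeleton)
{
    Joint knee = skeleton.Joints[JointType.KneeLeft];
    float jointZ = knee.Position.Z;   // meters, tracked or inferred

    DepthImagePoint p = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
        knee.Position, DepthImageFormat.Resolution640x480Fps30);
    float depthZ = p.Depth / 1000f;   // mm -> meters, only meaningful if visible

    System.Console.WriteLine("joint Z: {0:F3} m, pixel depth: {1:F3} m", jointZ, depthZ);
}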

How to efficiently spread objects on a 2D surface in a "natural" way?

I would like to efficiently generate positions for objects on a given surface. As you probably guessed, this is for a game. The surface is actually 3D terrain, but the third dimension does not matter, as it is determined by terrain height.
The problem is I would like to do this in the most efficient and easy way possible, but still get good results. What I mean by "natural" is something like what is mentioned in this article about Perlin noise (trees forming forests, large to small groups spread out over the land). The approach is nice, but too complicated. I need to do this quite often, and preferably without any more textures involved, even at the cost of worse performance (so the results won't be as pretty, but still good enough to give a nice natural terrain with vegetation).
The number of objects placed varies, but is generally around 50. A nice enhancement would be to somehow restrict placement of objects in areas with very high altitude (mountains), but I guess that could be done by placing a few more objects and deleting those placed above a given altitude.
This might not be the answer you are looking for, but I believe that Perlin Noise is the solution to your problem.
Perlin noise itself involves no textures; I believe you have a misunderstanding about what it is. For your purposes, it's basically a 2D grid that holds, for each point, a value between 0 and 1. You don't need to generate any textures. See this description of it for more information and an elegant explanation. The basics of Perlin noise involve making a few random noise maps, starting with one with very few points, with each new one having twice as many points of randomness (and lower amplitude), and adding them all together.
Especially, if your map is discretely tiled, you don't even have to generate the noise at a high resolution :)
How "often" are you planning to do this? If you're going to be doing it 10+ times every single frame, then Perlin Noise might not be your answer. However, if you're doing it once every few seconds (or less), then I don't think that you should have any worries about speed impact -- at least, for 2D Perlin Noise.
Establishing that, you could look at this question and my personal answer to it, which is trying to do something very similar to what you are trying to do. The basic steps involve this:
Generate perlin noise; higher turbulence = less clumping and more isolated features.
Set a "threshold" (ie, 0.5) -- anything above this threshold is considered "on" and anything above it is considered "off". Higher threshold = more frequent, lower threshold = less frequent.
Populate "on" tiles with whatever you are making.
Here are some samples of Perlin noise used to generate a 50x50 tile-based map. Note that the only difference between the two is the threshold: bigger clumps mean a lower threshold, smaller clumps mean a higher one.
A forest, with blue trees and brown undergrowth
A marsh, with deep areas surrounded by shallower areas
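A hedged sketch of steps 1-3 above using Unity's built-in Mathf.PerlinNoise; 'scale' and 'threshold' are the knobs described in the answer, and the random offsets are my own addition so each run differs:

using UnityEngine;

// Hypothetical sketch: build an on/off placement mask from thresholded
// Perlin noise. Lower scale = bigger features; lower threshold = bigger clumps.
static bool[,] GeneratePlacementMask(int width, int height, float scale, float threshold)
{
    float ox = Random.Range(0f, 1000f);   // random offsets into the noise field
    float oy = Random.Range(0f, 1000f);
    var mask = new bool[width, height];

    for (int x = 0; x < width; x++)
        for (int y = 0; y < height; y++)
        {
            // Mathf.PerlinNoise returns a value in roughly [0, 1].
            float n = Mathf.PerlinNoise(ox + x * scale, oy + y * scale);
            mask[x, y] = n > threshold;   // "on" tiles receive a tree, rock, etc.
        }
    return mask;
}

Tiles where the mask is true get an object; you could additionally skip tiles whose terrain height exceeds your mountain cutoff, per the question.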
Note you'll have to tweak the constants a bit, but you could do something like this:
First, pick a random point. (say 24,50).
Next, identify points of interest for this object. If it's a rock, your points might be the two mountains at 15,13 or 50,42. If it was a forest, it would maybe do some metrics to find the "center" of a couple local forests.
Next, calculate the distance vectors between the point and the points of interest, and scale them by some constant.
Now, add all those vectors to the point.
Next, determine whether the object is in a legal position. If it is, move on to the next object. If it's not, repeat the process.
Adapt as necessary. :-)
One thing: if you want to reject things like trees on mountains, you don't add extra tries; you keep trying to place an object until you find a suitable location, or until you've tried a bunch of times and need to bail out because it doesn't look placeable.
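Putting the steps and the retry note together, a hedged sketch (all names and constants are illustrative; isLegal() stands for whatever placement rule you need, such as an altitude cutoff or minimum spacing):

using UnityEngine;

// Hypothetical sketch: pick a random point, pull it toward points of
// interest, and retry until a legal position is found or we bail out.
static Vector2? PlaceObject(Vector2[] pointsOfInterest, Rect bounds,
                            System.Func<Vector2, bool> isLegal,
                            float pull = 0.3f, int maxTries = 20)
{
    for (int attempt = 0; attempt < maxTries; attempt++)
    {
        // 1) Pick a random point.
        var p = new Vector2(Random.Range(bounds.xMin, bounds.xMax),
                            Random.Range(bounds.yMin, bounds.yMax));

        // 2-4) Add the scaled distance vectors toward each point of interest.
        foreach (var poi in pointsOfInterest)
            p += (poi - p) * (pull / pointsOfInterest.Length);

        // 5) Accept only legal positions; otherwise repeat the process,
        //    bailing out after maxTries as suggested in the note above.
        if (isLegal(p)) return p;
    }
    return null;   // nowhere suitable found
}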
