I am making a mobile painting game in Unity and I've hit a serious problem: Unity's Input class is frame-dependent, so I can't sample the touch position frequently enough for my application to draw smoothly. As a result I get isolated points on the background, not connected to each other.
I tried simply connecting the points that Unity detects, and then my result was, of course, just the same points connected with straight lines. I tried this in the Unity Editor at about 180-200 fps, and on a mobile phone at 30-50 fps it looks even worse. I expect that I have to capture the touch positions somehow in Android Studio or Xcode, and only then use them in my C# code in Unity.
Am I right that I need tools external to Unity, or is there an easier way to do this directly in Unity? If there is none and I am right, can somebody give me some links to guides or tutorials on how to do it and integrate it with Unity? I have never worked outside of Unity and have no experience integrating external tools with it.
Note: I've tried FixedUpdate without any luck; it doesn't matter how often I read the position variables, what matters is how often they are updated. I also tried Event.current.mousePosition (in the Unity Editor) in the OnGUI method, but it made no difference either.
Upd: As I have already said, I need to get positions more frequently than the Input class provides them; it simply doesn't update fast enough. Here's what I get without connecting the points. The image shows the mousePosition sampling frequency at 180-200 fps. On phones it is even slower!
Upd: Here is my simplified code.
void Draw() // this method is invoked every frame
{
    // ...some calculations of x and y based on Input variables...
    currentMousePosition = new Vector2(x, y); // current mouse position on the sprite

    if (currentMousePosition != previousMousePosition)
    {
        // Step one texel at a time from the previous position toward
        // the current one, painting along the way.
        while (currentMousePosition != previousMousePosition)
        {
            mySprite.texture.SetPixels((int)previousMousePosition.x, (int)previousMousePosition.y, 3, 3, myColorArray);
            if (currentMousePosition.x > previousMousePosition.x)
                previousMousePosition.x++;
            if (currentMousePosition.x < previousMousePosition.x)
                previousMousePosition.x--;
            if (currentMousePosition.y > previousMousePosition.y)
                previousMousePosition.y++;
            if (currentMousePosition.y < previousMousePosition.y)
                previousMousePosition.y--;
        }
    }
    else
    {
        mySprite.texture.SetPixels((int)currentMousePosition.x, (int)currentMousePosition.y, 3, 3, myColorArray);
    }
    previousMousePosition = currentMousePosition;
}
//mySprite.texture.Apply() is invoked independently in another place to improve performance
The issue is that it is not possible to queue up touch positions that occurred mid-frame, so by sliding your finger quickly you will miss certain texels on your image. You should look at Bresenham's line algorithm: it is very fast and uses only integer math. Inside your Update() function, call this method:
Vector2 oldPoint;

public void UpdateDrawPoint(Vector2 newPoint)
{
    BresenhamLine(newPoint, oldPoint);
    oldPoint = newPoint;
}
I'm currently trying to implement a complex search field in a Unity project I'm working on, where a user has the ability to spawn points into a scene. My goal is to create a custom shape for that user based on the points they created, and then detect whether or not other objects are inside that shape, similar to detecting whether a point is inside a convex hull (I'm still shaky on the theory behind that, but an example can be found here). If possible, I'd also like the shape to update itself if the points are later moved, giving it an almost elastic, stretchy feel.
So far, every tutorial or resource I've found online does the exact same basic example, where a script assigns new verts, UVs and triangles to a custom mesh to make a plane out of two triangles. But this is frustratingly simple, and decidedly unhelpful when I simply don't know what the final shape will look like, or which triangles to actually draw, even when the user has as few as five points in the scene.
As of right now, the closest visual representation I could come up with uses a List to keep track of the user's points, and a script that draws a bunch of pseudo-triangles with LineRenderers, connecting every point (even ones that aren't on exterior faces) by iterating through the List multiple times. While this looks close to what I want, it isn't actually useful in any way, as I don't know how to 'fill' those faces, and I'm still relatively lost when it comes to determining whether or not an object is inside that hull, like the red sphere shown in the example below.
I can also destroy and redraw those lines repeatedly during the Update() method, which allows me to grab a point and move it around, resulting in the shape dynamically changing, but this results in an undesirable flashing effect that I'd sooner avoid for now.
As this is such a broad question, I've also included the method I'm using to draw these lines below, which parents a bunch of lines shaped like triangles to an empty game object for easy destruction and recreation:
void drawHull()
{
    if (!GameObject.Find("hullHold"))
    {
        hullHold = new GameObject();
        hullHold.name = "hullHold";
    }

    // For each point, draw pseudo-triangles to every later pair of points.
    for (int p = 0; p < points.Count; p++)
    {
        for (int i = p; i < points.Count - 2; i++)
        {
            lineEdge = Instantiate(lineReference);
            lineEdge.name = "Triangle" + i;

            // Cache the LineRenderer instead of calling GetComponent repeatedly.
            LineRenderer lr = lineEdge.GetComponent<LineRenderer>();
            lr.startColor = Color.black;
            lr.endColor = Color.black;
            lr.positionCount = 3;
            lr.SetPosition(0, points[p].transform.position);
            lr.SetPosition(1, points[i + 1].transform.position);
            lr.SetPosition(2, points[i + 2].transform.position);
            lr.loop = true;

            lineEdge.SetActive(true);
            lineEdge.transform.SetParent(hullHold.transform);
        }
    }
}
If anyone has encountered a similar problem somewhere else that I simply couldn't find, please let me know! Anything from more knowledge on creating a custom mesh to a more in-depth, beginner-friendly explanation of determining whether a point is inside a convex hull would be quite helpful. If it's at all relevant, I am working in VR and running version 2018.2.6f1 so that the Oculus Rift package and Unity play nicely together, but I haven't had any issues working in an environment a few months behind.
Thanks!
You do mention assigning verts and tris to a mesh. This is the 'right' way to do it in Unity, as it ties in with the highly optimized MeshRenderer; I also believe Colliders are able to use meshes built from shapes like that, so you should be able to just plug into PhysX and query it for colliders overlapping your mesh. Doing it by hand, i.e. iterating through faces and establishing the actual bounds of your object, is actually pretty hard to do efficiently.
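To make that concrete, here is a minimal sketch under some assumptions: you already have a triangulation of your hull (which you would need to compute separately with a convex-hull algorithm), the GameObject holding the collider sits at the origin with no rotation or scale (since the vertices below are world positions), and the point-inside test uses the common trick that Collider.ClosestPoint returns the query point itself when it lies inside a convex collider. The class and field names are illustrative, not from the question:

using UnityEngine;
using System.Collections.Generic;

public class HullTester : MonoBehaviour
{
    public List<Transform> points;   // the user's spawned points
    MeshCollider hullCollider;

    // Rebuild the hull mesh from the current points. 'triangles' must come
    // from a real triangulation step (not shown here); this just wires the
    // result into Unity's mesh/collider pipeline.
    public void RebuildHull(int[] triangles)
    {
        var mesh = new Mesh();
        var verts = new Vector3[points.Count];
        for (int i = 0; i < points.Count; i++)
            verts[i] = points[i].position;
        mesh.vertices = verts;
        mesh.triangles = triangles;
        mesh.RecalculateNormals();

        if (hullCollider == null)
            hullCollider = gameObject.AddComponent<MeshCollider>();
        hullCollider.sharedMesh = mesh;
        hullCollider.convex = true; // required for the ClosestPoint trick below
    }

    // True if 'position' lies inside the convex hull collider.
    public bool Contains(Vector3 position)
    {
        return hullCollider.ClosestPoint(position) == position;
    }
}

Because the hull is a real collider, you could alternatively let PhysX do the work and detect overlapping objects via OnTriggerEnter instead of testing single points.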
Recently I've been developing a VR project, so I need to use the gyroscope. Here is some code I found in the Unity manual:
// The Gyroscope is right-handed. Unity is left handed.
// Make the necessary change to the camera.
void GyroModifyCamera()
{
transform.rotation = GyroToUnity(Input.gyro.attitude);
}
private static Quaternion GyroToUnity(Quaternion q)
{
return new Quaternion(q.x, q.y, -q.z, -q.w);
}
Sorry for my poor math and English; can anyone give me some guidance and explain the meaning of the GyroToUnity function?
The gyro sensor value, Input.gyro.attitude, is returned in right-handed coordinates, but Unity uses a left-handed coordinate system. You can read about both coordinate systems here.
A simple image that illustrates the difference:
The GyroToUnity function is simply used to convert from the gyro's right-handed coordinates to the camera's left-handed coordinates. In effect, it flips the direction of the up/down and left/right rotation values coming from the gyro sensor as the device moves. The flipped value is returned and then assigned to the camera.
This is where the conversion/flipping is done:
return new Quaternion(q.x, q.y, -q.z, -q.w);
Hence the -q.z and -q.w: the minus sign flips those values. A quaternion (x, y, z, w) encodes a rotation by some angle about an axis; mirroring the z axis to switch handedness turns that into a rotation by the negated angle about the axis with its z component negated, which works out to (-x, -y, z, w). Since q and -q represent the same rotation, that is equivalent to (x, y, -z, -w).
What happens if the conversion is not performed, i.e. if the GyroToUnity function is not used?
If the device is attached to your face and you move your head up, the camera moves down, and vice versa. If you move it right, the camera moves left, and vice versa. It is supposed to do the opposite, and that's why that function is used.
The easiest way to see what's happening is to try it yourself: bypass the GyroToUnity function, then test it, and you will see the difference.
void GyroModifyCamera()
{
transform.rotation = Input.gyro.attitude;
}
Now, compare it with the original version of the code:
void GyroModifyCamera()
{
transform.rotation = GyroToUnity(Input.gyro.attitude);
}
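One detail the manual snippet above doesn't show: the gyroscope must be enabled before Input.gyro.attitude returns meaningful values. A minimal self-contained sketch (the class name GyroCamera is just illustrative):

using UnityEngine;

public class GyroCamera : MonoBehaviour
{
    void Start()
    {
        // The gyroscope is disabled by default and must be switched on,
        // otherwise Input.gyro.attitude stays at identity.
        Input.gyro.enabled = true;
    }

    void Update()
    {
        transform.rotation = GyroToUnity(Input.gyro.attitude);
    }

    private static Quaternion GyroToUnity(Quaternion q)
    {
        return new Quaternion(q.x, q.y, -q.z, -q.w);
    }
}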
I have a ball GameObject that I am making bounce using explicit physics equations. I tried using Unity's physics engine, but it was not exact enough. I am setting the ball's position like so:
float dTime = Time.time - timeSinceBounce;
ballPosition.x -= meterOffset / sensitivity;
ballPosition.y = ballInitialPosition.y + GameConstants.Instance.GetBallYVel() * dTime
                 - 0.5f * GameConstants.Instance.GetGravity() * dTime * dTime; // Yi + Vy*t - 0.5*G*t^2
ballPosition.z = (-dTime * GameConstants.Instance.GetBallZVel()
                 + ballInitialPosition.z) * startConstant; // Zi + t*Vz
ball.transform.position = ballPosition;
This code runs in the Update() method. When a collision is detected, the time and initial position are reset, and that works. Currently the ball bounces properly but shakes when it is moving quickly. The FPS is around 80. I don't think my equations are wrong, because the ball bounces properly; there is just a shake. What is the best way to remove the shake? Thanks!
Edit 1: I decided to write my own physics because Unity's physics engine has a lot of other components, like friction and roll, that were making the ball's bounces vary. Also, the camera's x and y are fixed; the only thing changing is the camera's z position. It works a lot better in FixedUpdate.
At the asker's request I hereby add all the details I can, even though without seeing the actual, full and complete code, along with seeing this "shake" effect, it's challenging to give an exact and proper answer. I'll try to do my best though. So let's do this.
1. "Shaking" reason: most probably you set up some kind of animation, along with a forced positioning/rotation/etc and these two codes work against each other. Double-check your AnimatorController and Animator as well as your Mecanim if you have set/configure these (also a good read: "Locomotion").You can also keep the Animator Window open while you are testing from Unity UI, to see who is the tricky guy doing the "shaking" effect for you, and get one step closer to why it happens.If these don't help, start from scratch: Minimize the code you wrote to move the ball, remove every effect, etc and add them back one by one. Sooner or later "shakieness" will come back (if it persist with the cleanest possible code, see above, Animator/Locomotion issue)
2. Think again before rewriting Unity physics... You mention, for example, "...physics engine had a lot of other components like friction and roll that were making the ball's bounces vary...". Now, this is something you can turn off, completely if you want. The easiest way is to set everything to zero on the Physic Material (i.e. friction and bounciness), but if you want Unity to ignore each and every bit of "actual physics" and let you do whatever you want with your GameObject, tick the isKinematic property of the Rigidbody. The fun part is that you'll still collide if and where you want, can AddForce, etc., yet neither gravity nor friction nor "anything physical" affects you. If you are not comfy with Unity (and it seems you are not), I highly recommend learning about it a bit first, as it is a very good and very powerful engine, so don't throw its features away just because they don't work the way you initially expected. For example, Sebastian Lague started a great beginner series a few weeks ago; the guy is a teacher, and his videos are short, fun and full of information, so I can only recommend his channel.
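To illustrate that second point, here is a minimal sketch of configuring such a "clean", friction-free bounce from code rather than in the Inspector (the values and the method name are just examples):

// Example setup for an idealized bouncing ball: no friction, lossless bounce.
void ConfigureBall(Rigidbody ballBody)
{
    var mat = new PhysicMaterial("PerfectBounce");
    mat.dynamicFriction = 0f;
    mat.staticFriction = 0f;
    mat.bounciness = 1f;                               // full energy return
    mat.bounceCombine = PhysicMaterialCombine.Maximum; // ignore the floor's material
    ballBody.GetComponent<Collider>().material = mat;

    // Alternatively, tick isKinematic to bypass forces entirely and
    // drive the ball's position yourself while keeping its collision shape:
    // ballBody.isKinematic = true;
}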
3. As for your code, I would make these changes:
public static class GameConstants { // no MonoBehaviour == can be a "proper, global" singleton!
    public static float GetGravity {
        get { return 9.81f; } // [whatever you want] -- or why not
        // Physics.gravity, used locally when and where it's needed?
    }
    // etc.: add your methods, properties and such.
    // Use this approach if and only IF you access these values application-wide!
    // Otherwise, instead of the """singleton""" logic you just implemented,
    // consider adding an empty GameObject to the Scene, naming it "GM" and
    // attaching a "GameManager" script to it, holding all the methods and
    // properties you'll need Scene-wide! You can also consider
    // using PlayerPrefs (see help link below) instead of a global static
    // class, to save memory (but you will need to "reach out" for the data).
}
private void FixedUpdate() { // FIXED update, as you'll be working with physics!
    // float dTime = Time.time - timeSinceBounce; // not needed
    // Positioning like this is not recommended; use AddForce, or set direction and velocity, instead.
    ballPosition.x -= meterOffset / sensitivity;
    ballPosition.y = ballInitialPosition.y + GameConstants.Instance.GetBallYVel()
        * Time.fixedDeltaTime - 0.5f * GameConstants.Instance.GetGravity() * Time.fixedDeltaTime * Time.fixedDeltaTime; // Yi + Vy*t - 0.5*G*t^2
    ballPosition.z = (-Time.fixedDeltaTime * GameConstants.Instance.GetBallZVel() // dTime replaced, since it was removed above
        + ballInitialPosition.z) * startConstant; // Zi + t*Vz
    ball.transform.position = ballPosition;
}
BUT to be honest,
I would just throw that code away, implement the bouncing-ball example the Unity Tutorials offer, alter it (see point #2) and work my way up from there.
Items I mentioned in the code but did not link:
PlayerPrefs, Physics.gravity and Rigidbody.velocity.
Note that setting 'velocity' directly ignores parts of the physics simulation (speeding up, being "dragged", etc.), i.e. objects suddenly start moving at the speed you set. (This is why Unity 5 recommends AddForce instead; linked above.)
It might also come in handy that you can alter the FixedUpdate() frequency via Time Management, though I'm not sure that is needed yet (i.e. just for bouncing a ball); see the snippet below.
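For reference, a minimal sketch of that change (120 Hz is an arbitrary example; the default step is 0.02 s, i.e. 50 Hz):

void Awake()
{
    // Run FixedUpdate (and physics) at 120 Hz instead of the default 50 Hz.
    // The same setting lives under Edit > Project Settings > Time.
    Time.fixedDeltaTime = 1f / 120f;
}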
I hope this helps and that ball is going to bounce and jump and dance and everything you want it to do ;)
Cheers!
So, this seems to be a strange one (to me). I'm relatively new to Unity, so I'm sure this is me misunderstanding something.
I'm working on a VSEPR tutorial module in unity. VSEPR is the model by which electrons repel each other and form the shape (geometry) of atoms.
I'm simulating this by making (in this case) four rods (cylinder primitives) and using Rigidbody.AddForce to apply an equal force to all of them. This works beautifully as long as the force is equal. In the following image you'll see the rods beautifully equidistant at 109.47 degrees (actually you can "see" their attached objects, two lone pairs and two electron bonds; the rods themselves are obscured in the atom shell).
(BTW the atom's shell is just a sphere primitive - painted all pretty.)
HOWEVER, in the real world, the lone pairs actually exert SLIGHTLY more force...so when I add this additional force to the model...instead of just pushing the other electron rods a little farther away, it pushes the ENTIRE 4-bond structure outside the atom's shell.
THE REASON THIS SEEMS ODD IS two things:
All the rods are children of the shell...so I thought that made them somewhat immobile compared to the shell (I.e. if they moved, the shell would move with them...which would be fine).
I have a ConfigurableJoint holding the rods to the center of the atom's shell (0,0,0). The x/y/z motion is set to Fixed. I thought this would keep the rods fairly immutably attached to the 0,0,0 center of the shell... but I guess not.
POINTS OF INTEREST:
The rods only push on each other, via a script attached to one rod adding force to an adjacent rod. They do not AddForce to the atom shell or anything else.
CODE:
void RepulseLike() {
    if (control.disableRepulsionForce) { return; } // No force when dragging

    //////////////////// DETERMINE IF FORCE APPLIED //////////////////////////////
    // Scroll through each Collider that this BondStick bumps into
    foreach (Collider found in Physics.OverlapSphere(transform.position, 1f)) {
        // Don't repel self
        if (found == this.collider) { continue; }

        // Check for charged particle
        if (found.gameObject.tag.IndexOf("Charge") < 0) { continue; } // No "Charge" in tag, not a charged particle

        /////////////// APPLY FORCE ///////////////
        // F = k * (q1*q2 / r^2)
        // where
        //   k = Coulomb's constant, represented here by repulseChargeFactor
        //   r = distance
        //   q1 and q2 are the signed magnitudes of the charges,
        //   in this case -1 for electrons and +1 for protons

        // Swap from local to global variable for other methods
        other = found;

        // Calculate push direction (and pushPoints for SingleBonds)
        forceDirection = (other.transform.position - transform.position) * magnetism; // magnetism = 1

        // F = k * (q1*q2 / distance^2); q1*q2 is always one in this scenario, and k is arbitrary.
        // ('distance' was computed elsewhere in the original snippet; shown inline here.)
        float distance = Vector3.Distance(transform.position, other.transform.position);
        force = control.repulseChargeFactor * (1f / Mathf.Pow(distance, 2));

        found.rigidbody.AddForce(forceDirection.normalized * force); // * Time.fixedDeltaTime);
    } // foreach Collider
} // RepulseLike
You may want to use spheres to represent the electrons, apply the forces to those, and then re-orient (rotate) each rod according to its sphere; see the sketch below.
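A minimal sketch of that idea, assuming each rod knows its electron sphere and the atom's center (all names here are illustrative, not from the question):

using UnityEngine;

// Hypothetical helper: the physics forces act only on the electron sphere;
// the visual rod re-orients itself to point at the sphere each frame.
public class RodFollower : MonoBehaviour
{
    public Transform electronSphere; // sphere the repulsion forces act on
    public Transform atomCenter;     // center of the atom shell

    void LateUpdate()
    {
        Vector3 dir = (electronSphere.position - atomCenter.position).normalized;
        // Rotate the rod's long (Y) axis to point along dir, keeping its base at the center.
        transform.rotation = Quaternion.FromToRotation(Vector3.up, dir);
        transform.position = atomCenter.position;
    }
}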
I think I found my own answer...unless someone has suggestions for a better way to handle this.
I simply reduced the mass of the electron rods and increased the mass of the sphere. I'm not sure whether this is the "best practices" solution or a kludge, so input is still welcome :-)
My question is whether there is a way to know the coordinates of the corners of a GameObject. What I have is three Vuforia AR targets and a GameObject, a cube in this case.
What I need to achieve is that when I move the targets around, the corners of the cube follow the targets, e.g. the cube would be as wide as the space between the targets.
Right now it checks the distance between the targets and sets the scale of the cube accordingly. This results in the cube always expanding from its set position, which makes positioning the targets awkward in real life.
Here's a couple of pictures showing what it does now, taken during execution:
Here is the code attached to the cube-object:
using UnityEngine;
using System.Collections;

public class show : MonoBehaviour {

    float distancex;
    float distancez;

    // Use this for initialization
    void Start () {
        renderer.enabled = false;
    }

    // Update is called once per frame
    void Update () {
        if (GameObject.Find ("target1").renderer.enabled && GameObject.Find ("target2").renderer.enabled &&
            GameObject.Find ("target3").renderer.enabled && GameObject.Find ("target4").renderer.enabled)
        {
            renderer.enabled = true;
        }

        distancex = Vector3.Distance (GameObject.Find ("target1").transform.position, GameObject.Find ("target2").transform.position);
        distancez = Vector3.Distance (GameObject.Find ("target2").transform.position, GameObject.Find ("target3").transform.position);
        Debug.Log (distancex);
        Debug.Log (distancez);

        // Apply both distances in a single scale assignment.
        transform.localScale = new Vector3 (distancex, transform.localScale.y, distancez);
        Debug.Log (transform.localScale);
    }
}
How do I make the corners of the object follow the targets? I need the width of each side to be the distance between the targets; is there any way to achieve this without using the scale?
I know this is quite some time after you asked the question, but I came across this as I was looking to sort something similar out myself.
What I found I needed (and it may help others) is to use Renderer.bounds.
What it looks like in practice for me:
Renderer rend;

void Update () {
    rend = currentGameObject.GetComponent<Renderer>();
    Debug.Log(rend.bounds.max);
    Debug.Log(rend.bounds.min);
}
My object in this case was a quad at position (0, 0, 200) with a scale of (200, 200, 1). The output of rend.bounds.max was (100.0, 100.0, 200.0); the min was (-100.0, -100.0, 200.0). This gives you the corner positions (granted, my example is in 2D space, but this should work for 3D as well).
To make it a little more specific for your example: if you wanted the top-right corner, you could take the X and Y of renderer.bounds.max; for the top-left you would take renderer.bounds.max.y and renderer.bounds.min.x. Hope that helps! (A sketch enumerating all eight corners in 3D follows below.)
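For a 3D object, all eight corners can be enumerated from bounds.min and bounds.max like this. Note this is only a sketch, and Renderer.bounds is axis-aligned in world space, so for a rotated object these are the corners of its bounding box rather than of the mesh itself:

// Returns the 8 corners of a renderer's world-space axis-aligned bounds.
Vector3[] GetBoundsCorners(Renderer r)
{
    Bounds b = r.bounds;
    Vector3 min = b.min, max = b.max;
    return new Vector3[]
    {
        new Vector3(min.x, min.y, min.z), new Vector3(max.x, min.y, min.z),
        new Vector3(min.x, max.y, min.z), new Vector3(max.x, max.y, min.z),
        new Vector3(min.x, min.y, max.z), new Vector3(max.x, min.y, max.z),
        new Vector3(min.x, max.y, max.z), new Vector3(max.x, max.y, max.z)
    };
}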
Cheers!
1. Create 8 empty game objects.
2. Make them children of the "cube" object to be tracked.
3. Move them in the Editor so they sit at the 8 corners of your tracked game object. Since they are children of the tracked object, their local positions will be {±0.5, ±0.5, ±0.5}.
4. Name them memorably (e.g. "top-left-back corner").
5. These 8 objects will move and scale with the tracked object, staying at its corners. Any time you want to know where a corner is, simply reference one or more of the 8 game objects by name and read each one's transform.position.
Note: If you're doing step 5 repeatedly, you may want to avoid potentially costly GameObject.Find() calls (or similar) each time. Do the lookups once and cache the 8 game objects in a field; a minimal sketch follows.
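Something like this, assuming the corner objects were named as in step 4 (the class name and the name filter are illustrative):

using System.Collections.Generic;
using UnityEngine;

public class CornerTracker : MonoBehaviour
{
    readonly List<Transform> corners = new List<Transform>();

    void Start()
    {
        // Find the corner children once and cache them,
        // instead of searching for them every frame.
        foreach (Transform child in transform)
            if (child.name.Contains("corner"))
                corners.Add(child);
    }

    void Update()
    {
        // The cached transforms always reflect the current corner positions.
        foreach (Transform corner in corners)
            Debug.Log(corner.name + ": " + corner.position);
    }
}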
I worked my way around the problem by determining the positions of the targets and computing their mean.
This is not flawless, however, since it is difficult to determine the scale factor with which external models would accurately follow the targets' corners when bringing them into the system.