I've been working on a project for a while, and what I ultimately have is a Minority Report-style setup with AR screens in front of me. I'm using a Unity plugin called Lean Touch, which lets me tap on the mobile screen and select the 3D objects. I can then move them around and scale them, which is great.
I have an undesired side effect, though. Under different circumstances it would be fantastic, but in this case it's hurting my head. If you trigger the target using a user-defined target, the screen appears in front of you with the camera feed (the real world) behind it. I can then tap an item and drag on my mobile device to interact with it. The problem is that if someone walks past my mobile in the background, the tracking 'pushes' my item off the screen as they go by.
In a different setup I could take advantage of this: I can wave my hand in front of the camera and interact with the item. Unfortunately, all I can do that way is push it from side to side, not scale it or anything else, and it's making the experience problematic. I would therefore like help figuring out how to stop tracking my objects' positions when the camera background changes, leaving my objects on screen until I touch the screen to interact.
I'm not sure exactly what needs turning on or off: whether it's a Vuforia setting, a Lean Touch or Unity setting (layers, etc.), a Rigidbody, or something else to add. I need directional advice on which way to go, as I'm going in circles at the moment.
Screenshot below.
http://imgur.com/a/NYekP
If I wave my hand in front of the camera or someone walks past, it moves the AR elements.
Ideas would be appreciated.
The only code that interacts with an object is below. It sits on a plane used as a holder object for the interactive screen prefab, so Lean Touch will move the plane (the screen itself is treated as a GUI element and is therefore not interactable). The last line hides the original plane so the duplicate isn't visible while I Lean Touch drag the screen.
using System.Collections;
using System.Collections.Generic;
using UnityEngine;
using Vuforia;

public class activateChildObjects : MonoBehaviour
{
    public GameObject MyObjName;
    // Get the original GUI item to disable on drag of the cloned item
    public string pathToDeactivate;

    // Cache the original panel once: GameObject.Find cannot locate inactive
    // objects, so calling it every frame would return null (and throw) after
    // the first SetActive(false).
    private GameObject originalPanel;

    void Start()
    {
        originalPanel = GameObject.Find("UserDefinedTarget/" + pathToDeactivate);
    }

    void Update()
    {
        // Show the child screens only while the trackable is found.
        bool found = DefaultTrackableEventHandler.lostFound;
        foreach (Transform child in MyObjName.transform)
        {
            child.gameObject.SetActive(found);
        }
        Debug.Log(found ? "GOT IT" : "Lost");

        // Remove the original panel
        if (originalPanel != null)
        {
            originalPanel.SetActive(false);
        }
    }
}
EDIT
OK, I haven't stopped it from happening, but after numerous code attempts I just ticked all the tracking options to try to prevent additional tracking once the target is found. It's not right, but it's an improvement. It will have to do for now.
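For anyone trying the code route, one direction that may help is this minimal sketch (not a confirmed fix): once the target has been found, detach the content from the tracked target so later pose updates no longer move it. The contentRoot field and the re-parenting approach are assumptions, not part of the original project:
using UnityEngine;

public class DetachOnFound : MonoBehaviour
{
    // Assumed child holding the AR screens; assign it in the Inspector.
    public Transform contentRoot;

    private bool detached;

    void Update()
    {
        // Re-uses the same flag the script above polls.
        if (!detached && DefaultTrackableEventHandler.lostFound)
        {
            // Un-parenting keeps the current world pose but stops the
            // trackable's pose updates from pushing the content around.
            contentRoot.SetParent(null, true);
            detached = true;
        }
    }
}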
As far as I know, there are two ways to manage animation:
Managing it in the form of objects.
Managing it with sprite images.
Is it effective to manage animations in object form, that is, by animating a character's joints directly, for 2D animation? What should I do to make Unity animation easier to understand? As a beginner I need a lot of information, so I would appreciate your help.
I am going to explain animation by manipulating GameObjects.
You need to add an Animator component to the GameObject you wish to animate. The Animator component needs an Animator Controller, and you also need to create an Animation Clip, which represents a single animation. (An Animator Controller is created automatically when you create an Animation Clip.)
Now, to get started with animation you need to focus on Animation Clips. After you add an Animation Clip, you can record an animation into it by hitting the record button in the Animation window. While recording, any change made to the GameObject will be recorded into the clip (for example, you might move your GameObject). Each such change creates a keyframe in the Animation timeline, and you can choose the time point at which a keyframe is created.
Unity will interpolate the changes between two keyframes automatically.
However, there is also an animation curve which allows you to define how changes are applied between time points.
After you record animations you can define how transitions between different animations are made in the Animator Window.
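As a hedged illustration of driving those transitions from a script (the "Speed" and "Jump" parameter names are assumptions for the example, not fixed names):
using UnityEngine;

public class PlayerAnimation : MonoBehaviour
{
    private Animator animator;

    void Start()
    {
        // The Animator component described above.
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        // Transitions set up in the Animator window are triggered via
        // parameters defined in the Animator Controller.
        animator.SetFloat("Speed", Mathf.Abs(Input.GetAxis("Horizontal")));
        if (Input.GetButtonDown("Jump"))
        {
            animator.SetTrigger("Jump");
        }
    }
}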
Unfortunately, I am not really sure what your question is about. For the question, this might be helpful:
https://www.youtube.com/watch?v=PuXUCX21jJU
Usually you have an image file with the animation frames of your 2D image object and use the Sprite Editor to cut them out for Unity.
You then add this clip to an Animation component on your GameObject.
Since this is a C# question, maybe you want to know how to access this component. The best place is Start(), where you add:
var anim = GetComponent<Animation>();
Now you can use the Animation component to play the configured Animation Clips.
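For example, assuming a clip named "Walk" has been added to the component (the clip name is just a placeholder):
// Plays the clip named "Walk"; anim.Play() with no argument plays the default clip.
anim.Play("Walk");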
I hope this helps you a little bit.
I currently have a 2D scene with an orthographic camera, and I can move my player with my WASD keys, which is great. I want to add click-to-move functionality as well, but I'm somewhat lost on an approach. I have read/watched some tutorials and everything seems to revolve around the NavMesh system.
My issue is that the ground and walls in my current scene have Sprite Renderers and/or BoxColliders on them, and I cannot have a Sprite Renderer and a Mesh Renderer on the same GameObject. Here is a quick screenshot of what I have:
Now I understand that I can easily create a click to move with a
Camera.main.ScreenToWorldPoint(Input.mousePosition);
and move towards that position with
Vector3.MoveTowards(transform.position, target, speed * Time.deltaTime);
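Put together, a minimal click-to-move sketch (before any pathfinding; the speed field and the z-preserving line are assumptions for a 2D scene):
using UnityEngine;

public class ClickToMove : MonoBehaviour
{
    public float speed = 5f; // units per second

    private Vector3 target;

    void Start()
    {
        target = transform.position;
    }

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Convert the click to world space; keep the player's z in 2D.
            target = Camera.main.ScreenToWorldPoint(Input.mousePosition);
            target.z = transform.position.z;
        }

        // Walk straight toward the clicked point (no obstacle avoidance yet).
        transform.position = Vector3.MoveTowards(transform.position, target, speed * Time.deltaTime);
    }
}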
The challenge for me now, and the knowledge I'm after, is this: with a layout like the one in the screenshot, how can I add some sort of pathfinding system so that if I were below the house and clicked above it, my character would walk around the house to get there?
Do I even need to edit my current sprites for the ground? I had an idea to just create extra GameObjects, add a Mesh Filter and Mesh Renderer to them with "None" for the materials, and place them like puzzle pieces around my scene to represent the areas where I want my player to be able to move.
Is the approach I'm thinking of viable? Is it too much? Is there an easier way?
You can use a "NavMeshAgent" to move your player. The NavMeshAgent component is attached to a mobile character in the game to allow it to navigate the scene using the NavMesh.
Once you have baked the NavMesh, it's easy to use:
navMeshAgent.SetDestination(target);
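For a click-to-move setup, a small sketch might wire the click up like this (the camera raycast onto the floor and the agent lookup are assumptions for the example):
using UnityEngine;
using UnityEngine.AI;

public class ClickToMoveAgent : MonoBehaviour
{
    private NavMeshAgent navMeshAgent;

    void Start()
    {
        navMeshAgent = GetComponent<NavMeshAgent>();
    }

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Raycast from the camera to find the clicked point on the floor.
            Ray ray = Camera.main.ScreenPointToRay(Input.mousePosition);
            RaycastHit hit;
            if (Physics.Raycast(ray, out hit))
            {
                // The agent plans a path on the baked NavMesh around obstacles.
                navMeshAgent.SetDestination(hit.point);
            }
        }
    }
}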
Reference -
Video Tutorial to create and use nav mesh, Unity Script reference, Navigation and Path Finding
Follow these steps to learn how to bake a Nav Mesh -
Create a 3D cube and scale it to (20, 1, 20) to make it a floor (also rename it to "floor").
Create another 3D cube, place it on the floor, and scale it by 5 on the Y axis (rename it to "house").
Duplicate the cube from step 2 and change its position so it doesn't overlap the other house.
Go to Window > Navigation. This will open the Navigation panel with the Object tab selected.
Select the "floor" object in the Hierarchy and tick the "Navigation Static" checkbox.
A popup will ask whether to enable navigation static for the children; click "Yes".
Go to the "Bake" tab in the Navigation panel and click the "Bake" button at the bottom.
You should be able to see the generated NavMesh in blue.
Screenshot for the same -
I am working on multiplayer collaboration in Unity3D using SmartFoxServer 2X, but I wish to change it into a hide-and-seek game.
When the hider (a third-person controller) presses the "seek me" button, the seeker tries to find the hider. But I don't know how to identify when the seeker sees the hider. Is it possible using raycasting? If yes, how? Please help me.
void Update()
{
    RaycastHit hit;
    // Cast a ray 50 units forward from this object.
    if (Physics.Raycast(transform.position, transform.forward, out hit, 50))
    {
        if (hit.collider.gameObject.name == "Deccan Peninsula1")
        {
            Debug.Log("detect.................");
        }
        if (hit.collider.gameObject.tag == "Tree")
        {
            Debug.Log("detect.........cube........");
            //Destroy(gameObject);
        }
    }
}
From Unity Answers by duck:
There's a whole slew of ways to achieve this, depending on the precise
functionality you need.
You can get events when an object is visible within a certain camera,
and when it enters or leaves, using these functions:
OnWillRenderObject, Renderer.isVisible,
Renderer.OnBecameVisible, and OnBecameInvisible
Or you can calculate whether an object's bounding box falls within the
camera's view frustum, using these two functions:
GeometryUtility.CalculateFrustumPlanes
GeometryUtility.TestPlanesAABB
If you've already determined that the object is within the camera's
view frustum, you could cast a ray from the camera to the object in
question, to see whether there are any objects blocking its view.
Physics.Raycast
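A minimal sketch combining the last two ideas, a frustum test followed by an occlusion ray (the field names are assumptions):
using UnityEngine;

public class SeekerVision : MonoBehaviour
{
    public Camera seekerCamera;    // the seeker's point of view
    public Renderer hiderRenderer; // the Renderer on the hider

    public bool CanSeeHider()
    {
        // 1. Is the hider's bounding box inside the camera frustum?
        Plane[] planes = GeometryUtility.CalculateFrustumPlanes(seekerCamera);
        if (!GeometryUtility.TestPlanesAABB(planes, hiderRenderer.bounds))
        {
            return false;
        }

        // 2. Is the line of sight blocked by something else?
        Vector3 origin = seekerCamera.transform.position;
        Vector3 toHider = hiderRenderer.bounds.center - origin;
        RaycastHit hit;
        if (Physics.Raycast(origin, toHider.normalized, out hit, toHider.magnitude))
        {
            // Visible only if the first thing the ray hits is the hider.
            return hit.transform == hiderRenderer.transform;
        }
        return true;
    }
}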
You could do many things to find out whether the seeker has found the hider. I am going to suggest how I would do it, and I will try to make the idea/code as efficient as possible.
Each GameObject knows its position via its Transform component. You can check how close one object is to the other by comparing the distance between them against a threshold. The moment the two objects are close enough, you enter a new state. In this state you fire a raycast only when the seeker's direction/angle of view changes. Think of it this way: your seeker is looking around, and as he spins he fires the raycast. The main idea is not to fire too many raycasts all the time, which would hinder performance. Make sure your raycast ignores everything except the object you are looking for.
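As a rough sketch of that two-stage check (the distance threshold, layer mask, and rotation test are assumptions for the example):
using UnityEngine;

public class SeekerCheck : MonoBehaviour
{
    public Transform hider;
    public float nearDistance = 10f; // assumed proximity threshold
    public LayerMask hiderMask;      // set this to the hider's layer only

    private Quaternion lastRotation;

    void Update()
    {
        // Stage 1: cheap distance check every frame.
        if (Vector3.Distance(transform.position, hider.position) > nearDistance)
        {
            return;
        }

        // Stage 2: raycast only when the seeker's view direction has changed.
        if (Quaternion.Angle(lastRotation, transform.rotation) < 1f)
        {
            return;
        }
        lastRotation = transform.rotation;

        RaycastHit hit;
        // The mask makes the ray ignore everything except the hider's layer.
        if (Physics.Raycast(transform.position, transform.forward, out hit, nearDistance, hiderMask))
        {
            Debug.Log("Seeker found the hider!");
        }
    }
}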
If you get stuck, you can ask a more specific question, probably regarding raycasts and how to make them ignore walls or not shoot through walls; or maybe you'll discover that solution on your own.
Good luck!
I'm using the MonoGame implementation of XNA to create a Windows Phone game. The player should be able to move objects across the level using flick gestures. I'm using the TouchPanel class to read gestures.
Here's how I initialize the TouchPanel:
TouchPanel.EnabledGestures = GestureType.Flick;
Here is how I read the gestures in the Update method:
while (TouchPanel.IsGestureAvailable)
{
var g = TouchPanel.ReadGesture();
...
}
However, the only field that is filled in is the Delta vector. How do I find out the point where the user started the gesture?
Since I want my game to be cross-platform, I cannot rely on non-XNA code such as Silverlight gesture handlers.
Using the Flick gesture to move objects is not a good idea, as Flicks are positionless.
You should use the Hold or FreeDrag gesture instead, detecting the object and then moving it across the screen.
Flick is really meant for quick swipes such as inertial scrolling, so I suggest you choose another gesture.
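For instance, a minimal FreeDrag-based sketch (objectBounds and objectPosition are hypothetical fields standing in for your game object's state):
// During initialization:
TouchPanel.EnabledGestures = GestureType.FreeDrag;

// In the Update method:
while (TouchPanel.IsGestureAvailable)
{
    GestureSample g = TouchPanel.ReadGesture();
    if (g.GestureType == GestureType.FreeDrag)
    {
        // Unlike Flick, FreeDrag fills in Position as well as Delta.
        if (objectBounds.Contains((int)g.Position.X, (int)g.Position.Y))
        {
            objectPosition += g.Delta; // drag the object with the finger
        }
    }
}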
I have to make a program about particle systems in C#, and I'm using Visual Studio 2010. I must not use any libraries like OpenGL; I can use only the standard C# graphics classes. I tried to read some tutorials and lectures, but I still don't know how to implement it.
Can anybody help me understand how I should decompose my problem, or maybe point me to some useful tutorials? It's new for me and I'm kind of stuck.
My task:
Program a simple generator of luminous particles in the shape of a geyser from a point generator, and display the particles as different-colored dots moving along a single track on a black background in parallel projection.
Look up BackgroundWorker and RenderTargetBitmap; it is probably best to do this in WPF.
Pseudocode:
backgroundWorker()
{
    while (isRunning)
    {
        update();
        draw();
    }
}

update()
{
    foreach (particle in particles)
    {
        // apply gravity and/or interactions with other particles
    }
}

draw()
{
    // clear the bitmap
    foreach (particle in particles)
    {
        // draw the particle
    }
    // set the bitmap on your container
}
This is based upon a game loop.
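To make the draw step concrete, here is a hedged WPF sketch that renders particles into a RenderTargetBitmap shown by an Image control; particleImage, the particle fields, and the 640x480 size are all assumptions:
using System.Windows.Media;
using System.Windows.Media.Imaging;

void Draw()
{
    // Draw all particles into a visual, then render the visual to a bitmap.
    var visual = new DrawingVisual();
    using (DrawingContext dc = visual.RenderOpen())
    {
        // Black background; parallel projection here just means we ignore
        // depth and use x/y screen coordinates directly.
        dc.DrawRectangle(Brushes.Black, null, new System.Windows.Rect(0, 0, 640, 480));
        foreach (var p in particles)
        {
            dc.DrawEllipse(new SolidColorBrush(p.Color), null,
                           new System.Windows.Point(p.X, p.Y), 2, 2);
        }
    }

    var bitmap = new RenderTargetBitmap(640, 480, 96, 96, PixelFormats.Pbgra32);
    bitmap.Render(visual);

    // Must run on the UI thread, e.g. via Dispatcher.Invoke from the worker.
    particleImage.Source = bitmap;
}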
Though not a complete answer, here is how I would break down the problem if it were my task:
Find a way to render a single particle (perhaps as a dot or a ball). I recommend starting with WPF.
Find a way to "move" that particle programmatically (i.e. by rendering it in different places over time).
Create a Particle class that can store its current state (position, velocity, acceleration) and can be moved (i.e. SetPosition, SetVelocity...)
Create a World class that represents a 3-dimensional space and keeps track of all the particles in it. The World will know the laws of physics and will be responsible for moving particles around given their current state.
Create a bunch of Particles that have the same initial starting position but random initial acceleration vectors. Add those Particles to the World. The World will automatically update their positions from then on.
Apply what you learned in steps 1 and 2 to have your World graphically represent all of its Particles.
Edit: Step 7 would be to "run step 6 in a loop". The loop code provided by noHDD is a very good basic framework: continuously update the state of the world and then draw the results.
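A bare-bones sketch of steps 3 and 4 (all class and member names here are illustrative, not prescribed):
using System.Collections.Generic;

class Particle
{
    public double X, Y, Z;    // position
    public double VX, VY, VZ; // velocity
    public double AX, AY, AZ; // acceleration

    public void SetPosition(double x, double y, double z) { X = x; Y = y; Z = z; }
    public void SetVelocity(double vx, double vy, double vz) { VX = vx; VY = vy; VZ = vz; }
}

class World
{
    private readonly List<Particle> particles = new List<Particle>();
    private const double Gravity = -9.81; // the World knows the laws of physics

    public void Add(Particle p) { particles.Add(p); }

    // Advance every particle by one time step (simple Euler integration).
    public void Step(double dt)
    {
        foreach (var p in particles)
        {
            // Acceleration (plus gravity on the vertical axis) updates velocity...
            p.VX += p.AX * dt;
            p.VY += (p.AY + Gravity) * dt;
            p.VZ += p.AZ * dt;
            // ...and velocity updates position.
            p.X += p.VX * dt;
            p.Y += p.VY * dt;
            p.Z += p.VZ * dt;
        }
    }
}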