I am developing a game with Vuforia in Unity.
What I am attempting to do is display a Plane/Canvas/Image while Vuforia is detecting the surface, so my users will not feel lost while Vuforia is scanning.
The problem is that I've looked into the only two scripts I can find,
DefaultTrackableEventHandler.cs
and
DefaultInitializationErrorHandler
but I could not pinpoint where the detection is initialized.
So my question is: does anyone know in which part of the code Vuforia initializes the scan, so that I can customise it?
I am assuming you are using Ground Plane, since you wrote "detecting the surface".
The Plane Finder Behaviour script is responsible for "scanning" and finding horizontal surfaces. Check the Vuforia samples from the Asset Store and you will see that this script constantly performs hit tests to find positions on detected surfaces. As long as On Automatic Hit Test is being called, your device has found a surface and is performing hits towards your indicator.
In the samples, HandleAutomaticHitTest, attached to the PlaneManager, is called every time the automatic hit test fires. You can modify HandleAutomaticHitTest to achieve what you want; a rough sketch follows.
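A minimal sketch of that idea, assuming you wire the method to the Plane Finder Behaviour's "On Automatic Hit Test" event in the Inspector (the ScanIndicator class and scanningPanel field are illustrative, not from the samples):

```csharp
using UnityEngine;
using Vuforia;

public class ScanIndicator : MonoBehaviour
{
    // Panel (Plane/Canvas/Image) shown while Vuforia is still scanning.
    [SerializeField] private GameObject scanningPanel;

    // Hook this up to the Plane Finder Behaviour's "On Automatic Hit Test"
    // event. Receiving a HitTestResult means a surface has been found.
    public void HandleAutomaticHitTest(HitTestResult result)
    {
        if (scanningPanel.activeSelf)
            scanningPanel.SetActive(false);
    }
}
```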
One last note: I am not 100% sure, but the scanning probably starts when the positional device tracker is started, or when Vuforia starts.
What we did was display an "instruction panel" that filled the entire screen when the scene loaded. It allowed the device to detect the environment while the user read the instructions. Then we had an "Ok" button at the bottom that disabled the panel when clicked, revealing the AR experience (screenshot of the instruction panel omitted).
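A minimal sketch of that approach (all names are illustrative; wire HidePanel to the "Ok" Button's OnClick event in the Inspector):

```csharp
using UnityEngine;

public class InstructionPanel : MonoBehaviour
{
    // Full-screen UI panel with the instructions and an "Ok" button.
    [SerializeField] private GameObject panel;

    void Start()
    {
        // Cover the screen while Vuforia scans the environment behind it.
        panel.SetActive(true);
    }

    // Assign this to the "Ok" Button's OnClick event in the Inspector.
    public void HidePanel()
    {
        panel.SetActive(false);
    }
}
```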
I would like to create a C# script, using a Leap Motion hand-tracking controller, to turn a tap on and off in Unity.
The tap has two sides, hot and cold, and two water particle systems. If you touch the hot button sphere, hot water comes out; if you touch the cold button sphere, cold water comes out; and you can turn each off again just by touching the corresponding sphere. The code could also be used to interact with other objects by touch.
I have researched online for a C# script that triggers an action on an object with Leap Motion, but could not find one. I tried to replicate some of the VR tutorials, but ended up with 19 errors, and I can't add the script until all of the errors are fixed.
I have tried some basic C# scripting tutorials on YouTube.
It would help if someone could write a basic, generic script that can be used for multiple objects.
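As a generic starting point, here is a hedged sketch. It assumes the Leap Motion hand models carry physics colliders (as the standard Interaction Engine setup provides) and that each button sphere has a trigger collider; all names are illustrative:

```csharp
using UnityEngine;

// Attach to each "button sphere" (with a trigger collider) and assign the
// matching water ParticleSystem in the Inspector.
public class TouchToggle : MonoBehaviour
{
    [SerializeField] private ParticleSystem water;

    void OnTriggerEnter(Collider other)
    {
        // Toggle the assigned particle system whenever something (e.g. a
        // Leap Motion hand collider) touches the sphere.
        if (water.isPlaying)
            water.Stop();
        else
            water.Play();
    }
}
```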
**Update:** I've created a test scene where I recreated the usage of a canvas with image and text, primitive game objects, and two cameras in addition to the Camera Rig, with target textures set to the same render texture. In this state it worked; however, when I installed the Lightweight Render Pipeline and upgraded all materials to it, the render texture turned pink and would not render anything from the cameras.
Considering this, my next step is to remove the Lightweight Render Pipeline by reverting to a previous commit which does not include it. *If you run into the same situation, remember that if you do not have a previous commit to revert to, you will need to create new materials for all your game objects after removing the Lightweight Render Pipeline.*
Problem: In one scene within a VR project we are using a world-space canvas to display interactable UI. When running through the editor we have no issues; however, when we build the project, all UI canvases become invisible, though with the use of a laser pointer we can still interact with buttons on the canvas.
I've narrowed the cause down to one specific render texture, which is applied as the target texture of two cameras in the scene. The two cameras are used to provide a live feed of a device's view to a mesh in the scene.
Setting the two cameras' target textures to null (neither is the main camera) is the only way I can get the canvas to appear.
After running a build I always check the output_log.txt file, and have not found any errors.
We are using:
Unity 2018.1.3f1,
VRTK 3.3.0a,
Steam VR w/HTC Vive,
Unity's Lightweight Render Pipeline,
Post Process layer
There is only one canvas in the scene, with all UI objects as children of that object. Our canvas setup is shown below.
Note: I've set the VRTK_UI Canvas component to inactive to check whether that was the cause; it was not.
(Canvas screenshot omitted)
Camera One (screenshot omitted)
Note: I've tried clicking "Fix now" under the target texture, with no change or improvement.
Camera Two (screenshot omitted)
Note: as with Camera One, "Fix now" made no change or improvement.
Mesh we are rendering to (screenshot omitted)
Render texture (screenshot omitted)
Main camera (screenshot omitted)
The Lightweight Render Pipeline was the issue; removing it allowed everything to work as expected.
We faced the same problem with one camera. Disabling the camera and manually calling Render() from another script resolved the issue.
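A minimal sketch of that workaround, assuming feedCamera is assigned in the Inspector and already has the render texture set as its target texture (the names are illustrative):

```csharp
using UnityEngine;

public class ManualCameraRender : MonoBehaviour
{
    [SerializeField] private Camera feedCamera;

    void Start()
    {
        // Disable the camera so Unity's normal render loop skips it.
        feedCamera.enabled = false;
    }

    void LateUpdate()
    {
        // Render into the target texture explicitly once per frame,
        // after all movement for the frame has been applied.
        feedCamera.Render();
    }
}
```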
I have followed the official Unity multiplayer guide below and everything works, except that when I run two instances on the same computer (one as a build and the other in play mode), the characters for some reason move in a circle on their own.
I have no idea why, since I have followed the tutorial exactly, unless I missed something :P I am currently on step 9 (identifying the local player) and I stopped there because my players keep moving in circles.
To clarify, they aren't spinning in place; they are walking in a circle. Just imagine a person following a dotted circle on the floor.
This issue only happens when I run two instances (a build-and-run instance plus play mode in Unity). If I use only play mode in Unity, everything works fine.
Has anyone experienced this before?
Unity Multiplayer Tutorial: https://unity3d.com/learn/tutorials/topics/multiplayer-networking/network-manager?playlist=29690
I am on version 2017.2.0f3 <-- maybe this is why? Should I update to a different patch?
Thank you in advance
Where I spawn the characters (code screenshot omitted).
Build and run: the player just moves in circles automatically (screenshot omitted).
Both build-and-run and play mode: both players again move in circles automatically (screenshot omitted).
I see a first issue in your code:
In PlayerController.cs, line 36, you wrote
var bullet = (GameObject)Instantiate(BulletPrefab, BulletSpawn.transform.position, BulletSpawn.transform.rotation);
it should be
var bullet = (GameObject)Instantiate(BulletPrefab, BulletSpawn.position, BulletSpawn.rotation);
since BulletSpawn is already a Transform; otherwise bullets might not fire in the gun's direction.
I don't have any player moving without me pressing a keyboard key.
Here is a screenshot of two build-and-run instances working correctly (screenshot omitted).
I also tried build-and-run plus the Unity editor in play mode, and had no problem.
Maybe the problem comes from your keyboard or Unity's Input Manager? Since you are using Input.GetAxis, check https://docs.unity3d.com/Manual/class-InputManager.html
The issue with your character automatically moving is that something plugged into your computer is acting as a controller/joystick. Go into the Input Manager settings and set each movement axis's joystick to the last joystick number. Make sure you set this for all vertical and horizontal movement axes. That should do the trick.
For example, if you use a 3D mouse like a 3Dconnexion device, it can act as a joystick/controller and auto-move your character; the sketch below shows how to see what Unity detects.
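A quick diagnostic sketch (Input.GetJoystickNames is a standard Unity API; the class name is illustrative):

```csharp
using UnityEngine;

public class JoystickDiagnostic : MonoBehaviour
{
    void Start()
    {
        // Log every device Unity currently treats as a joystick; a 3D mouse
        // or similar stray device will show up in this list.
        string[] names = Input.GetJoystickNames();
        Debug.Log("Connected joysticks: " + names.Length);
        foreach (string n in names)
            Debug.Log("Joystick: " + n);
    }
}
```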
So in this maze kind of game I'm making, I will show the maze to the player for 30 seconds.
What I don't want is the player taking a screenshot of the maze.
I want to do something like Snapchat or Instagram, which detect when you take a screenshot of a snap/story.
I'm using C#. A solution that prevents the user from taking screenshots entirely would also work; I don't mind either way.
Is there a possible way to detect when the user takes screenshots or prevent it in Unity?
No, you can't detect this reliably. The player could also take a photo with a digital camera. Furthermore, there are endless ways to create a screenshot, and the OS has no "callback" to inform an application about it. You could try to detect the "print screen" key, but as I said, other screenshot / screen-recording tools could use any hotkey, or no hotkey at all. I have never used Snapchat, but it seems it's not safe either.
There are even monitors and video projectors with a freeze mode that keeps the current image. You could also run the game in a virtual machine; there you can freeze the whole virtual PC or take screenshots of the virtual screen, and an application running inside the VM has no way to detect or prevent that.
I once had to do something similar. If you just want to do what Snapchat does, it can be done, but remember that as long as the app runs on someone else's device instead of your server, it can be decompiled, modified, and recompiled, so this screenshot detection can be circumvented.
First of all you need to know this about Apple's rule:
2.5.9 Apps that alter or disable the functions of standard switches, such as the Volume Up/Down and Ring/Silent switches, or other native user interface elements or behaviors will be rejected.
So, the idea of altering what happens when you take a screenshot is eliminated.
What you do is start the game, then do the following while you are showing the maze to the player for 30 seconds:
On iOS:
Continuously check whether the player presses the power and home buttons at the same time. If this happens, restart the game and show the maze to the player for 30 seconds again. Do it over and over until the player stops. You can even disconnect or ban the player when you detect the power + home button press.
On Android:
Continuously check whether the player presses the power and volume-down buttons at the same time, then perform the same action described above.
You cannot do this with C# alone. You have to make plugins for both iOS and Android. The plugin should use Java to do the detection on Android and Objective-C to do it on iOS, because the required APIs are not available in C#. You can then call the Java and Objective-C functions, and receive their callbacks, from C#.
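A minimal sketch of the C# side of that bridge, assuming the native Java / Objective-C code calls UnitySendMessage("ScreenshotWatcher", "OnScreenshotDetected", "") when it detects the button combination (the GameObject and method names are illustrative):

```csharp
using UnityEngine;

// Attach to a GameObject named "ScreenshotWatcher" so the native plugin's
// UnitySendMessage call can find it.
public class ScreenshotWatcher : MonoBehaviour
{
    // Invoked from the native plugin via UnitySendMessage.
    public void OnScreenshotDetected(string message)
    {
        Debug.Log("Screenshot attempt detected.");
        // Restart the 30-second maze preview here, or penalize the player.
    }
}
```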
Other improvements to make:
Check for external display devices and disable them while you are showing the maze to the player for 30 seconds; enable them again afterwards.
When you detect the screenshot button press as described above, immediately take your own screenshot too (see the sketch after this list). Loop through the images in the player's picture gallery and load all the images taken that day. Compare them with the screenshot you just took and see whether any match. If one does, you can be very sure the player is trying to cheat. Take action, such as banning the player, restarting the game, or even trolling the player by sending their screenshot to the other player. You can also use it as proof that the user was cheating if they complain after being banned.
Finally, you can go even deeper with OpenCV. While you are showing the player the maze for 30 seconds, start the device's front camera and use OpenCV to continuously check whether any object other than the player's head is in front of the camera. If so, the player may be trying to photograph the screen with another device; take action immediately. You can use machine learning to train this.
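For the "take your own screenshot" step above, a hedged sketch: ScreenCapture.CaptureScreenshot is a standard Unity API (2017.1+), but reading the player's gallery for the comparison would still require the native plugin described earlier (the class and file names are illustrative):

```csharp
using System;
using UnityEngine;

public class EvidenceCapture : MonoBehaviour
{
    // Call this the moment a screenshot attempt is detected.
    public void CaptureEvidence()
    {
        string fileName = "evidence_" + DateTime.UtcNow.Ticks + ".png";
        // On mobile platforms this saves under Application.persistentDataPath.
        ScreenCapture.CaptureScreenshot(fileName);
    }
}
```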
How far you go depends on how much time you want to spend and how much you care about players cheating online. The only thing to worry about is players decompiling the game and removing these features, but it is still worth implementing.
"My Android phone takes screenshots differently. I swipe down from the top of the screen and select the 'Capture' option."
Nothing is ever the same on Android; this is different on some older or otherwise different Android devices. You can detect swipe patterns on the screen. The best way to do this is to build a profile that handles each Android device from the different manufacturers.
For those commenting: this is possible to do, and you should do it, especially in a multiplayer game. Just because a game can be hacked does not mean a programmer should not implement basic hack-prevention mechanisms. Implement the basics, then improve them as you get feedback from players.
I am just getting started with ARKit and Unity and tried the most basic thing: placing an object in the scene. I am placing the object 3 units in front of the camera, not attached to any anchor or generated plane. When I move my camera, the object also moves slightly, which I believe should not happen.
How do I optimize the anchoring of the object in the AR space?
At the start of an AR session, objects will move with the camera until ARKit is able to initialize and get its bearings.
After initialization, objects may still move a bit as ARKit's understanding of the world improves, or if its view is obscured, etc.
However, large-scale movements should not be common; small movements are just the nature of the tech right now.
If the pre-initialization movement is unsettling, you can consider hiding the object until the ARSession has initialized.
I've not actually used Unity with ARKit, so I can't give you code for that exact setup, but feel free to try out Viro React, a cross-platform mobile AR/VR framework that lets you build AR/VR apps in React Native (and it's free!).
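As an illustration of the hide-until-initialized idea mentioned above, a minimal sketch using AR Foundation's ARSession.state (a newer API than the Unity ARKit plugin the question likely used, so treat it as an assumption-laden example):

```csharp
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class HideUntilTracking : MonoBehaviour
{
    // The AR content to keep hidden until the session is tracking.
    [SerializeField] private GameObject content;

    void Update()
    {
        // Show the object only once the AR session is actively tracking.
        content.SetActive(ARSession.state == ARSessionState.SessionTracking);
    }
}
```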