How to let an EventSystem raycast through a part of UI - c#

I'm using a modern (Event System) UI approach in Unity. I have a screen space Canvas, some interactable world space elements, and my camera is properly set up with a Physics Raycaster. I want some of my screen space Canvas elements to let rays through and hit the world space elements.
I set the Event Mask on the Physics Raycaster to just the UI layer, and the elements I want to ignore are on another layer, but that doesn't seem to do anything.
Here's a picture:
The panel itself and the labels on top and bottom are set to the Ignore Raycast layer.
I'm using Unity 2017.4 LTS.

Turn off Raycast Target on the UI elements that should only be visible and not receive any interaction.
https://docs.unity3d.com/ScriptReference/UI.Graphic-raycastTarget.html
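For example, a minimal sketch of turning the flag off from code, based on the Graphic.raycastTarget property linked above (the component name and serialized array are just illustrative):

using UnityEngine;
using UnityEngine.UI;

public class ClickThroughUI : MonoBehaviour
{
    // Assign the graphics (Image, Text, etc.) that should be purely visual.
    [SerializeField] private Graphic[] passThroughGraphics;

    private void Awake()
    {
        // With raycastTarget off, the GraphicRaycaster ignores these elements,
        // so pointer events can reach the world-space objects behind the Canvas.
        foreach (var graphic in passThroughGraphics)
        {
            graphic.raycastTarget = false;
        }
    }
}

The same flag can also be unchecked per element in the Inspector, which is usually enough if the set of pass-through elements never changes at runtime.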

Related

Displaying a Canvas over a GameObject but it's not orienting properly

I'm trying to display some Particles and a Slider (like a health bar) over a GameObject in Unity 2020.3.26, but even though it shows the way I want in the Scene view, when playing, the canvas seems to orient itself "too much", and I don't know how to make both the Particles and the Slider stay in place.
As shown in the images, I've put a canvas on a GameObject "Aimable Target Body", and when I look at the center of the object it works as I want.
But if I start looking away, instead of staying centered, the UI drifts away a little (don't mind the big blue magic circle in the middle, it's supposed to be there).
I don't think I have a problem with my Canvas settings (I make it so it's always facing the camera), but I don't see where else it could go wrong...
Thanks in advance for your answers !
(Images: Game Objects, Scene View, Correct UI, Wrong UI, Canvas Settings)
The main reason for this is that the target exists in the world, while the slider exists on the canvas. Those two elements have different coordinate systems, and if you want the center of the Circle+Slider to be at the center of the Sphere, you have to project one set of coordinates to another one (World to Screen).
This can be done using the WorldToScreenPoint method on Camera.
Basically, the UI elements (living on the Canvas) have to read the sphere's transform.position in the world every Update and set their own position using _mainCamera.WorldToScreenPoint(sphereTransform.position).
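A minimal sketch of that idea (the component and field names are illustrative, not from the original answer), assuming the Canvas uses Screen Space - Overlay so screen coordinates can be assigned directly to the UI element's position:

using UnityEngine;

public class FollowWorldTarget : MonoBehaviour
{
    [SerializeField] private Camera _mainCamera;        // the camera rendering the world
    [SerializeField] private Transform sphereTransform; // the world object to follow

    private void Update()
    {
        // Project the target's world position onto the screen and place this UI element there.
        transform.position = _mainCamera.WorldToScreenPoint(sphereTransform.position);
    }
}

Attach something like this to the root UI element (the circle plus slider) on the Canvas and it will track the sphere every frame.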

Spawn prefab in front of UI Image

I spawn the prefab in a GameObject, and that GameObject is a child of a UI Image. What I'm trying to accomplish is to display the prefab in front of the UI Image. I've tried making the z position of the GameObject negative, but nothing happens. I can't set the sorting layer of the UI Image because that option doesn't exist. What should I do? Is this even possible?
Update
Tried adding a second camera. I set the GameObject's layer to SecondCam, changed the second camera's Clear Flags to Depth Only and its Culling Mask to SecondCam only, and put that camera on the SecondCam layer as well. I changed the main camera's Culling Mask to everything except SecondCam. But this doesn't work. Have I done something wrong?
(Images: Second Camera settings, Main Camera settings)
If your GameObject is not one of the UI components, you may need to give it another layer and render it with another camera whose Depth is higher than the UI camera's.
You can change the sorting layer on prefabs or any object created at runtime.
If your UI canvas is set to Screen Space - Overlay, then it will ALWAYS render on top. You have to use a Screen Space - Camera or World Space UI, along with Dave's answer.
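As a rough sketch of the runtime sorting idea mentioned above (the "AboveUI" layer name and order value are just examples, and this only has an effect with a Screen Space - Camera or World Space canvas, as the last answer notes):

using UnityEngine;

public class SpawnAboveUI : MonoBehaviour
{
    [SerializeField] private GameObject prefab;

    public void Spawn()
    {
        GameObject go = Instantiate(prefab, transform);

        // Raise every renderer of the spawned prefab above the canvas sorting order.
        foreach (Renderer r in go.GetComponentsInChildren<Renderer>())
        {
            r.sortingLayerName = "AboveUI"; // example layer; must exist in Tags and Layers
            r.sortingOrder = 10;            // higher than the canvas's Order in Layer
        }
    }
}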

Detect if an object in 3D space is within/touches a UI image's bounds

I have a square aim (a UI image) set to the middle of the screen. Now I want to detect if enemies (visualized by pink boxes here) are within this aimbox, but also, and this is very important, if only a part of the enemy is within the aimbox.
What I found so far is that I can't use (please correct me if I'm wrong):
- Raycast from mid-screen, because then you have to aim directly at the enemy.
- Boxcast, because it only creates the box at the target raycast point.
- Raycast from the enemy to the screen to detect the object at a screen point, because the ray is cast from the middle of the enemy.
- Spherecast, because it's not a box and thus unreliable.
- A box object (with collider) with a huge length into the screen, because visually it will not retain the size of the aimbox image.
I have thought about using several raycasts from the screen into the world, using start points at the 3, 6, 9 and 12 o'clock positions of the aimbox image as well as from the corners and midpoint of the UI aimbox image. But this seems like a really unreliable solution.
Here is an image of what I'm trying to achieve.
I need to detect all the boxes that are within, or partially within, the aimbox UI image.
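For what it's worth, here is a rough sketch of the multi-raycast idea described in the question (the aimbox RectTransform, camera reference and max distance are placeholders), firing rays through the aimbox's corners, edge midpoints and center:

using System.Collections.Generic;
using UnityEngine;

public class AimboxRaycaster : MonoBehaviour
{
    [SerializeField] private RectTransform aimbox; // the UI image in the middle of the screen
    [SerializeField] private Camera cam;
    [SerializeField] private float maxDistance = 100f;

    public HashSet<Collider> GetHits()
    {
        var hits = new HashSet<Collider>();

        // With a Screen Space - Overlay canvas, the world corners are already in screen pixels.
        Vector3[] corners = new Vector3[4];
        aimbox.GetWorldCorners(corners);

        // Sample the 4 corners, the 4 edge midpoints and the center of the aimbox.
        var samplePoints = new List<Vector3>(corners);
        for (int i = 0; i < 4; i++)
            samplePoints.Add((corners[i] + corners[(i + 1) % 4]) * 0.5f);
        samplePoints.Add((corners[0] + corners[2]) * 0.5f);

        foreach (Vector3 screenPoint in samplePoints)
        {
            Ray ray = cam.ScreenPointToRay(screenPoint);
            if (Physics.Raycast(ray, out RaycastHit hit, maxDistance))
                hits.Add(hit.collider);
        }
        return hits;
    }
}

As the question already notes, this only detects enemies that happen to intersect one of the sampled rays, so enemies sitting between sample points can still be missed.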

how to convert coordinate between cameras with Unity3D

I have two cameras,
Why do I have two cameras?
Because I want some game objects in front of the Canvas so that they can move around, maybe have some particles if they feel happy, etc. So:
I set up the main Camera with its culling mask set to UI only and made it in charge of all the UI stuff, and to make sure it does not overlap everything I set the canvas render mode to "Screen Space - Camera".
I set up another camera and made it capture everything else except the UI stuff.
I set their Depth values so that the main Camera with the UI renders behind the other Camera.
Things go well so far; I can see the game object in front of the UI stuff. Cheers!
But in some cases I want to convert a UI element's position to a point in world space, so that I can spawn some GameObject near the UI element and maybe tween it to move to another UI element. Let's take collecting gems as an example.
Usually, I could just do that with following code:
// requires DOTween (using DG.Tweening;)
GameObject go = Instantiate(eff_gem);
go.transform.position = Camera.main.ScreenToWorldPoint(p1.GetComponent<RectTransform>().position);
go.transform.DOMove(p2.GetComponent<RectTransform>().position, 1f).OnComplete(() => {
    Destroy(go, 1f);
});
The main idea is to instantiate a gem near a button and make it fly to the count panel.
But since I made the canvas "Screen Space - Camera" instead of "Screen Space - Overlay", all the above code becomes a mess. Furthermore, I think I have to convert the screen point of the canvas element to world space via the other camera, but I can't even get the actual pixel position of the canvas element from p1.GetComponent<RectTransform>().position now.
So how can I do that?
I have made a demo project to demonstrate the issue.
var p1R = p1.GetComponent<RectTransform>();
// With a Screen Space - Camera (or World Space) canvas, the RectTransform already lives in
// world space, so TransformPoint converts the rect's local center into a world position.
Vector3 p1WorldPos = p1R.TransformPoint(p1R.rect.center);
// use p1WorldPos in your other camera
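Putting that together with the original snippet, a rough sketch (still assuming DOTween and the same eff_gem, p1 and p2 references) would be:

var p1R = p1.GetComponent<RectTransform>();
var p2R = p2.GetComponent<RectTransform>();

// Both RectTransforms are in world space when the canvas is Screen Space - Camera,
// so the gem can be spawned and tweened directly between their world centers.
GameObject go = Instantiate(eff_gem);
go.transform.position = p1R.TransformPoint(p1R.rect.center);
go.transform.DOMove(p2R.TransformPoint(p2R.rect.center), 1f).OnComplete(() => {
    Destroy(go, 1f);
});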

Screen and World space UI in the same scene

I have a Unity project, where I need to have both - screen space and world space UI elements (It's an AR project so I need to have some buttons in the world space, and others - attached to the screen).
As I understand it, I shouldn't have more than one UI canvas in the scene, so how can I have both Screen and World space UI elements in the same scene?
You can have more than one canvas in a Unity scene. Just right-click in the hierarchy and add two canvases, make one world space and the other screen space. Check the official tutorial here.
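If you prefer to set this up from code rather than the hierarchy, a minimal sketch (object names are arbitrary) just gives each Canvas a different renderMode:

using UnityEngine;
using UnityEngine.UI;

public class CanvasSetup : MonoBehaviour
{
    private void Start()
    {
        // Screen space canvas: always drawn over the camera view (for screen-attached buttons).
        var screenCanvas = new GameObject("ScreenCanvas", typeof(Canvas), typeof(CanvasScaler), typeof(GraphicRaycaster));
        screenCanvas.GetComponent<Canvas>().renderMode = RenderMode.ScreenSpaceOverlay;

        // World space canvas: positioned like any other object in the scene (e.g. on an AR anchor).
        var worldCanvas = new GameObject("WorldCanvas", typeof(Canvas), typeof(GraphicRaycaster));
        var canvas = worldCanvas.GetComponent<Canvas>();
        canvas.renderMode = RenderMode.WorldSpace;
        canvas.worldCamera = Camera.main; // event camera needed for world-space UI interaction
    }
}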
