Add plane boundary point on touch ARCore? - c#

I am working with ARCore in Unity and would like to be able to add points to ARCore's boundary plane by touching a point on the screen. Is this possible?
More specifically, I want to be able to touch a point on the screen that I think should be part of ARCore's current plane and have that point be added to the plane's mesh.
I've looked around a bit through their documentation and on Stack Overflow and haven't been able to find an answer.

This is not possible, and there is a good chance it never will be.
Reason:
AR works with 3D reality, not a 2D scene. That said, when you tap on your screen, how is the app supposed to know how far away you mean that point to be?
If the app has not already recognised a plane there, it has no way to tell whether your tap refers to a point 30 cm from the camera, 35 cm, or 2 m away.
See the problem? Only if the app already knows the distance for sure can it decide by itself whether something is a plane.
The only workaround would be to physically place markers where you want to build a plane, say three coordinates, and create a plane between them. But that goes against the whole user-experience direction of AR, so I don't expect anyone today to bother doing something like that...
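
For context, here is a rough sketch of how a screen touch is normally resolved with the ARCore Unity SDK's hit test (GoogleARCore's Frame.Raycast): it only returns a world position where ARCore has already detected a plane, which is exactly why a bare touch cannot extend a plane's boundary. This is only an illustration under those assumptions; placedPrefab is a hypothetical reference, and nothing here adds points to the plane mesh.

using GoogleARCore;
using UnityEngine;

public class TouchHitTest : MonoBehaviour
{
    // Hypothetical prefab to drop on a tapped plane point.
    public GameObject placedPrefab;

    void Update()
    {
        if (Input.touchCount == 0 || Input.GetTouch(0).phase != TouchPhase.Began)
            return;

        Touch touch = Input.GetTouch(0);
        TrackableHit hit;

        // The raycast only succeeds where ARCore has already detected a plane,
        // because that is the only place depth information exists.
        if (Frame.Raycast(touch.position.x, touch.position.y,
                          TrackableHitFlags.PlaneWithinPolygon, out hit))
        {
            Instantiate(placedPrefab, hit.Pose.position, hit.Pose.rotation);
        }
    }
}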

Related

Determining open directions/areas for player in water polo game - C#/ Unity

OK, here's a fun problem I'm currently working on that maybe someone can help me with:
I have a water polo game that's in 2.5D (aka 3D but with sprites)
I'm looking to determine which directions are considered "open space" for the player to be able to drift into.
The players have sphere colliders, and currently I'm doing things like Physics.OverlapSphere to determine who is in the general vicinity, but to make it more realistic I want the players to be able to move into areas that are open.
I was thinking about just checking cardinal directions with a raycast of distance 2 or 3 and keeping a list of available directions the player can drift in, but I'm sure there's some magical mathematical formula to determine open angles or something based on where opposing players have been found by the raycast/OverlapSphere.
Can't find much online to help because every Google search tries to funnel me to "how to move a player in Unity" -_- haha
Any fun little tips, ideas or mathematical directions to point me in (no pun intended haha) would be appreciated and might help others who are trying to make team-based games too!
Thanks (:
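
For anyone landing here with the same question, below is a minimal sketch of the radial/cardinal raycast idea described above, assuming all players sit roughly on the same horizontal (XZ) plane; checkDistance, the 0.5f cast radius and opponentMask are hypothetical tuning values, not something taken from an existing project.

using System.Collections.Generic;
using UnityEngine;

public class OpenDirectionFinder : MonoBehaviour
{
    public float checkDistance = 3f;
    public LayerMask opponentMask;

    // Returns world-space directions around the player that are not blocked by an opponent.
    public List<Vector3> GetOpenDirections(int samples = 8)
    {
        var open = new List<Vector3>();
        for (int i = 0; i < samples; i++)
        {
            // Spread sample directions evenly around the player on the XZ plane.
            float angle = i * Mathf.PI * 2f / samples;
            Vector3 dir = new Vector3(Mathf.Cos(angle), 0f, Mathf.Sin(angle));

            // A sphere cast is a slightly "fatter" raycast, so near-misses still count as blocked.
            if (!Physics.SphereCast(transform.position, 0.5f, dir, out _, checkDistance, opponentMask))
                open.Add(dir);
        }
        return open;
    }
}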

Wall colliders getting out of screen unity

Basically I'm developing a 2D Pang-style game for mobile, where you pop some balls. So I created some wall colliders, but every time I change the resolution, the left and right walls just don't stay in the right place: they either move off the screen or into the screen, depending on the resolution. I use them as boundaries for the bouncing balls. I've struggled with so many scripts trying to fit the objects to the screen, but I can't find a solution for this. I found similar issues, but their solutions unfortunately didn't work :/
Any ideas?
Thank you for all the help
I believe all you need are Screen.currentResolution and Camera.main.ScreenToWorldPoint(). This could make things easier. You can adjust the borders' position and scale to match your screen before runtime or after the screen resolution changes. Visit the Unity documentation for more details.
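
To make that concrete, here is a minimal sketch of the idea above, assuming a main camera looking down the Z axis at a play field at z = 0 and two wall objects; leftWall and rightWall are hypothetical references you would assign in the Inspector, and Screen.width/Screen.height are used here rather than Screen.currentResolution.

using UnityEngine;

public class ScreenEdgeWalls : MonoBehaviour
{
    public Transform leftWall;
    public Transform rightWall;

    void Start()
    {
        Camera cam = Camera.main;

        // Distance from the camera to the play plane at z = 0 (the usual 2D setup).
        float z = -cam.transform.position.z;

        // Convert the left and right edges of the screen into world positions.
        Vector3 leftEdge  = cam.ScreenToWorldPoint(new Vector3(0f, Screen.height * 0.5f, z));
        Vector3 rightEdge = cam.ScreenToWorldPoint(new Vector3(Screen.width, Screen.height * 0.5f, z));

        // Snap each wall's X coordinate to its screen edge, keeping the rest unchanged.
        leftWall.position  = new Vector3(leftEdge.x,  leftWall.position.y,  leftWall.position.z);
        rightWall.position = new Vector3(rightEdge.x, rightWall.position.y, rightWall.position.z);
    }
}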

RTS Camera: The y and z values seem to be mixed up

I've been trying to figure out how to use the RTS plugin from the Unity Asset Store, but there seems to be a problem. I have no idea why this is happening, but the z and y values are mixed up. I can scroll left and right perfectly, but whenever I press "W" the screen zooms out instead of moving upwards. The same applies to scrolling: when I scroll, the screen moves upwards/downwards and doesn't zoom like it's supposed to. I tried creating my own RTS camera via Brackeys and the same thing happened: his game would zoom, mine would just move upwards. I'm not sure what's wrong. I'm fairly new to all this Unity jazz. ANY help would be appreciated.
It's a little hard to know exactly why this is happening with the information you have provided, but if I were you, the first thing I would check is your Input Manager! Have you done that and made sure your inputs correlate to exactly what you want?
It sounds like your settings may not be the defaults, which could explain the differences.
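
If the issue does turn out to be how the axes are wired up, here is a hedged sketch of one way to keep planar movement and zoom separate, assuming the default "Horizontal"/"Vertical" and "Mouse ScrollWheel" axes; panSpeed and zoomSpeed are hypothetical tuning values and this is not the plugin's actual code.

using UnityEngine;

public class SimpleRtsCamera : MonoBehaviour
{
    public float panSpeed = 10f;
    public float zoomSpeed = 200f;

    void Update()
    {
        // W/S ("Vertical") should slide the camera along the ground plane (Z),
        // not along its own forward vector, which would look like zooming.
        float h = Input.GetAxis("Horizontal");
        float v = Input.GetAxis("Vertical");
        transform.position += new Vector3(h, 0f, v) * panSpeed * Time.deltaTime;

        // The scroll wheel changes the camera height (Y) instead, giving a zoom-like effect.
        float scroll = Input.GetAxis("Mouse ScrollWheel");
        transform.position += Vector3.down * scroll * zoomSpeed * Time.deltaTime;
    }
}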

Camera calibration and predict the eye focusing point on screen OpenCV / EmguCV

I have a project idea that checks web usability using eye tracking. For that I need to predict the focus point on the screen (i.e. pixel coordinates on the screen) at a specific time interval (0.5 seconds).
Here is some additional information:
I intend to use OpenCV or EmguCV, but it is causing me a bit of trouble because of my inexperience with OpenCV.
I am planning to "flatten" the eye so it appears to move on a plane. The obvious choice is to calibrate the camera to try to remove the radial distortion.
During the calibration process the user looks at the corners of a grid on a screen. The moments of the pupil are stored in a Mat for each position during the calibration. So I have an image with dots corresponding to a number of eye positions when looking at the corners of a grid on the screen.
Is there any article or example I can refer to in order to get a good idea about this scenario and OpenCV-based eye prediction?
Thanks!
Different methods of camera calibration are possible (including ones similar to your corner-dots method).
There is thesis work on eye gaze using C++ and OpenCV that should definitely help you. You can find some OpenCV-based C++ scripts there as well.
FYI:
Some works have been presented that claim eye-gaze tracking without calibration:
Calibration-Free Gaze Tracking an experimental analysis by Maximilian Möllers
Free head motion eye gaze tracking without calibration
[I am restricted to post less than 2 reference links]
To get precise eye locations, you first need to calibrate your camera, using the chessboard approach or some other tool. Then you need to undistort the image if it is distorted.
OpenCV already comes with an eye detector (a Haar cascade classifier; see the haarcascade_eye.xml file), so you can locate and track the eye easily.
Beyond that, it's just math to map the detected eye to the location it is looking at.
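
Since the question mentions EmguCV, here is a hedged sketch of running that bundled Haar eye cascade from C#. It assumes EmguCV 3.x or later, a webcam at index 0, and that haarcascade_eye.xml sits next to the executable; it only demonstrates detection on a single frame, and the calibration/gaze-mapping math is still up to you.

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

class EyeDetectionSketch
{
    static void Main()
    {
        // Hypothetical cascade path; ship haarcascade_eye.xml alongside the application.
        var eyeCascade = new CascadeClassifier("haarcascade_eye.xml");

        using (var capture = new VideoCapture(0))   // default webcam
        using (Mat frame = capture.QueryFrame())    // grab one frame
        using (var gray = new Mat())
        {
            CvInvoke.CvtColor(frame, gray, ColorConversion.Bgr2Gray);

            // Detect candidate eye regions in the grayscale frame.
            Rectangle[] eyes = eyeCascade.DetectMultiScale(gray, 1.1, 10);

            // Mark each detection; the rectangle centres are what you would
            // feed into your calibration / gaze-mapping step.
            foreach (Rectangle eye in eyes)
                CvInvoke.Rectangle(frame, eye, new MCvScalar(0, 255, 0), 2);
        }
    }
}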

How can I Map my hand coordinates to Mouse pointer in KINECT?

I'm doing a Kinect project (in WPF) where I need to control the mouse cursor with my hand.
I was able to track my hand coordinates as the hand moves across the window. I want to assign those coordinates to the mouse pointer, but I don't know how to do that. Could somebody please help me?
I appreciate your time reviewing and answering my question.
Thank you.
I don't know whether you are using Kinect Interaction. With Kinect Interaction, the hand coordinates in the HandPointer are normalized to the range 0 to 1 within the interaction zone, so just multiplying them by the appropriate resolution will be fine. (The predefined Kinect Interaction controls in the SDK do this too; if those controls satisfy your goal, you may just use them.)
If not, and you are using the Skeleton data directly, the skeleton coordinates are based on physical distance in meters, so you must find a proper scale yourself. If you intend to use the skeleton only, you should normalize the skeleton coordinates yourself, e.g. subtract the skeleton (torso) position from the hand position, and measure arm length beforehand so that both adults and children can use your application comfortably. Also note that in skeleton coordinates the positive Y direction is upwards, while on screen it is downwards, so you should use a negative scale there.
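
As a concrete illustration of the first suggestion, here is a minimal sketch that maps a normalized (0..1) HandPointer position onto the primary screen and moves the Windows cursor via the Win32 SetCursorPos API; HandCursorMapper.MoveCursor is a hypothetical helper you would call from your own hand-tracking update.

using System.Runtime.InteropServices;
using System.Windows; // WPF SystemParameters

static class HandCursorMapper
{
    [DllImport("user32.dll")]
    private static extern bool SetCursorPos(int x, int y);

    // Call this whenever your hand-tracking code produces a new normalized position.
    public static void MoveCursor(double normalizedX, double normalizedY)
    {
        // Scale the 0..1 interaction-zone coordinates to the primary screen size.
        int screenX = (int)(normalizedX * SystemParameters.PrimaryScreenWidth);
        int screenY = (int)(normalizedY * SystemParameters.PrimaryScreenHeight);
        SetCursorPos(screenX, screenY);
    }
}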
