arcore get light source location - c#

New to AR development, just trying to get a sense of the possibilities with ARCore. Does anyone know if there's a way to get the 3D location of the brightest visible point through ARCore's API? Hours of googling turned up plenty about shaders that react to environmental light, so the light's location/direction must be retrievable from somewhere in the program. I could always just find the brightest point in the 2D texture from the camera, but since ARCore is already figuring out where everything is in space for you, it seems reasonable that it could tell you where in space the brightest (i.e. most unique/placeable) point is. Any ideas?
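As far as I know, ARCore does not expose a world-space position for the brightest point; its Environmental HDR light estimation instead reports an estimated direction (plus intensity and color) for the dominant light source. A minimal sketch of reading it in Unity via AR Foundation, assuming light estimation is enabled on the camera manager (MainLightReader and the field names are illustrative):

using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class MainLightReader : MonoBehaviour
{
    public ARCameraManager cameraManager; // light estimation must be enabled on it
    public Light mainLight;               // a directional light to drive

    void OnEnable()  { cameraManager.frameReceived += OnFrame; }
    void OnDisable() { cameraManager.frameReceived -= OnFrame; }

    void OnFrame(ARCameraFrameEventArgs args)
    {
        // Environmental HDR estimates a *direction* for the dominant light,
        // not the 3D position of the brightest visible point.
        if (args.lightEstimation.mainLightDirection.HasValue)
            mainLight.transform.rotation =
                Quaternion.LookRotation(args.lightEstimation.mainLightDirection.Value);
    }
}

A direction is enough to drive shadows and light-reactive shaders, which is why those shader examples work without ever knowing a light position.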

Related

How can I get a 2D cross-section of a 3D object using a shader?

I have an idea for a 2D game that uses 3D models that are cut at the center of the Z axis. However, I have found no way to properly get just the intersection to show.
Squeezing the camera's near and far clipping planes together so the view volume is extremely thin leaves only a small outline of the model, since there's no way for it to render the inside. I could use ray marching, but it would limit me to primitive objects and seems a little overkill for what I need. I've found a lot of cross-section shaders online, but they require the back of the camera not to be culled.
Below is a visual example of what I need (demonstrated with Blender's boolean tool rather than a shader). Any help is appreciated, as I've been looking for a way to do this for months.
Before Intersection vs After Intersection

OpenTK/GL Texture has weird lines going through it unless camera is in a sweetspot

There's a lot of code so I'm not going to put it here. I'm just wondering if anyone has had a similar issue to this and how they solved it.
I'm trying to create a 3D scene in C# using OpenGL and have got lighting working as well as basic primitive models and more complex models. But now I'm trying to texture the walls and I'm getting these weird lines running through the texture. Whenever I move the camera the lines change and glitch around, but they are always vertical lines.
I have noticed there is a sweet spot for the camera when looking directly at the wall from the right distance.
The sweet spot
Rotate left - lines appear
Camera at right angle but wrong distance - texture disappears
I know it's not a lot to go on, so I'm not expecting anyone to be able to tell me exactly what to do. I just want to know if anyone knows what these weird results could be. I'm thinking I may need to change some texture parameters but I'm not sure which ones. Is there a clipping parameter or something? Any sort of guidance here would be appreciated.
This looks a lot like z-fighting, which is caused by rendering two surfaces at the same depth. Numerical instability will cause some areas to be drawn over others in a changing pattern.
If you can find out which surfaces you're drawing twice and get rid of one of them, then this problem should go away.
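If the duplicated geometry is intentional (a decal over a wall, for example), a polygon offset can bias the depth test so one surface wins consistently. A small OpenTK sketch, where DrawOverlaySurface is a stand-in for your own draw call:

using OpenTK.Graphics.OpenGL;

// Bias the overlay's depth values slightly toward the camera so the
// depth test resolves the same way every frame instead of flickering.
GL.Enable(EnableCap.PolygonOffsetFill);
GL.PolygonOffset(-1.0f, -1.0f); // negative values pull the surface forward
DrawOverlaySurface();           // hypothetical draw call for the surface that should win
GL.Disable(EnableCap.PolygonOffsetFill);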

Camera calibration and predict the eye focusing point on screen OpenCV / EmguCV

I have a project idea to check web usability using eye tracking. For that I need to predict the focus point on the screen (i.e. pixel coordinates on the screen) at a fixed time interval (0.5 seconds).
Here is some additional information:
I intended to use OpenCV or EmguCV, but it is causing me a bit of trouble because of my inexperience with OpenCV.
I am planning to "flatten" the eye so it appears to move on a plane. The obvious choice is to calibrate the camera to try to remove the radial distortion.
During the calibration process the user looks at the corners of a grid on a screen. The moments of the pupil are stored in a Mat for each position during the calibration, so I have an image with the dots corresponding to a number of eye positions when looking at the corners of a grid on the screen.
Is there any article or example I can refer to for a good idea about this scenario and OpenCV eye prediction?
Thanks!
Different methods of camera calibration are possible, including ones similar to your corner-dots method.
There is thesis work on eye gaze tracking using C++ and OpenCV that should definitely help you. You can find some OpenCV-based C++ scripts there as well.
FYI:
Some works claim eye gaze tracking without calibration:
Calibration-Free Gaze Tracking: An Experimental Analysis, by Maximilian Möllers
Free head motion eye gaze tracking without calibration
(I am restricted to posting fewer than two reference links.)
To get precise locations of the eyes, you first need to calibrate your camera, using the chessboard approach or some other tool. Then you need to undistort the image if it is not straight.
OpenCV already comes with an eye detector (a Haar classifier; see the haarcascade_eye.xml file), so you can locate and track the eye easily.
Besides that, it's only math to map the detected eye to the location it is looking at.
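A minimal EmguCV sketch of that detector step (assuming haarcascade_eye.xml, which ships in OpenCV's data folder, sits next to the executable; calibration and undistortion happen before this):

using System.Drawing;
using Emgu.CV;

static Rectangle[] FindEyes(Mat frame)
{
    // Load OpenCV's pre-trained Haar cascade for eyes.
    var eyes = new CascadeClassifier("haarcascade_eye.xml");
    using (var gray = new Mat())
    {
        // The cascade expects a grayscale, contrast-normalized image.
        CvInvoke.CvtColor(frame, gray, Emgu.CV.CvEnum.ColorConversion.Bgr2Gray);
        CvInvoke.EqualizeHist(gray, gray);
        // scaleFactor 1.1 and minNeighbors 10 are typical starting values; tune them.
        return eyes.DetectMultiScale(gray, 1.1, 10);
    }
}

Each returned Rectangle can then be tracked frame to frame, and the pupil located inside it (e.g. by thresholding or moments) before your screen-mapping math.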

fingertip detection from depth Image using kinect (c# and wpf)

I'm making a project for my university. I'm working with Kinect and I want to detect fingertips using the Microsoft SDK. I did depth segmentation based on the closest object and the player, so I now have only the hand part in my image. But I have no idea what to do next. I'm trying to get the hand's boundary but haven't found a method to do it.
Can anyone help me and suggest some useful ways or ideas to do that?
I am using C# and WPF.
It would be great if anyone could provide sample code. I only want to detect the index finger and the thumb.
Thanks in advance.
By the way, about finding the boundaries, think about this:
Kinect gives you the position of your hand, including its Z position, which is the distance from the Kinect device. Every pixel of your hand (from the depth stream) is at almost the same Z position as the hand joint. A boundary pixel is one that is much further in the background than your hand. This way you can find the boundary of your hand in every direction; you just need to analyze the pixel data.
Hope it helped.
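A rough sketch of that pixel test (the names and the 80 mm tolerance band are assumptions; depth values come from the SDK's depth stream in millimeters, and handZ is the hand joint's distance):

static class HandBoundary
{
    const int BandMm = 80; // assumed depth tolerance around the hand joint

    // depth: width*height array in millimeters from the Kinect depth stream.
    public static bool IsHandPixel(short[] depth, int x, int y, int width, int handZ)
    {
        if (x < 0 || y < 0 || x >= width || y * width + x >= depth.Length)
            return false;
        int d = depth[y * width + x];
        return d > 0 && System.Math.Abs(d - handZ) < BandMm;
    }

    // A boundary pixel is a hand pixel with at least one non-hand neighbor.
    public static bool IsBoundaryPixel(short[] depth, int x, int y, int width, int handZ)
    {
        return IsHandPixel(depth, x, y, width, handZ)
            && (!IsHandPixel(depth, x - 1, y, width, handZ)
             || !IsHandPixel(depth, x + 1, y, width, handZ)
             || !IsHandPixel(depth, x, y - 1, width, handZ)
             || !IsHandPixel(depth, x, y + 1, width, handZ));
    }
}

Once you have the boundary pixels, fingertips tend to show up as the sharpest convex points along that contour (a convex hull with convexity defects is the usual next step).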
Try this download; it is Kinect finger detection. Get the "Recommended" download on the page; I have found the .dlls in it work well. Good luck!

XNA 2d arcade game sprite follow

I am going to make a game like the XNA example game "Platformer1" which comes with XNA. But I need longer levels that don't fit on the screen (like Super Mario levels). How can I manage this kind of level? Do I need to use a 2D camera that follows the sprite? If I do it this way, how can I load the level? I am a bit confused and I am not sure if I explained my problem clearly. I hope someone can help.
The tutorial based on the Platformer Starter Kit on MSDN has a step, Adding a Scrolling Level, which guides you through the creation of longer levels. The tutorial is very detailed; I highly recommend it.
I couldn't find the tutorial in the section for XNA Game Studio 4.0, but the differences should be minimal. According to the comment at the bottom of the page, all you need to change is to replace
spriteBatch.Begin(SpriteBlendMode.AlphaBlend, SpriteSortMode.Immediate, SaveStateMode.None, cameraTransform);
with
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, SamplerState.LinearClamp, DepthStencilState.Default, RasterizerState.CullCounterClockwise, null, cameraTransform);
in the tutorial code.
If you want to create a side-scrolling game, then I would look into parallax scrolling. A quick Google/Bing search will help you find lots of tutorials. Another useful tip is to search YouTube for XNA videos, as a lot of posters share their source code.
Here is a link to Microsoft's Parallax Scrolling.
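The core of parallax scrolling is just drawing each background layer offset by a fraction of the camera's movement. A hedged XNA sketch (layer textures are assumed to be at least as wide as the screen; factors closer to 0 mean more distant layers):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Draw each layer shifted by a fraction of the camera offset; distant
// layers (small factor) appear to move more slowly than near ones.
void DrawParallaxLayers(SpriteBatch sb, Texture2D[] layers, float[] factors, Vector2 camera)
{
    for (int i = 0; i < layers.Length; i++)
    {
        float x = (-camera.X * factors[i]) % layers[i].Width; // wraps into (-Width, 0]
        sb.Draw(layers[i], new Vector2(x, 0), Color.White);
        // Second copy fills the gap left by the wrap-around.
        sb.Draw(layers[i], new Vector2(x + layers[i].Width, 0), Color.White);
    }
}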
Sounds like you have a few problems ahead of you.
But I need longer levels that don't fit on the screen (like Super Mario levels). How can I manage this kind of level?
There are several ways to do this, but a fairly easy way would be to have a 2D array (or sparse array, depending on how large your levels are) of a class named Tile that stores info about the tile image, animation, ...whatever.
Yes, you'll probably want a "camera". This can be as simple as only drawing a certain range of that array or a more featured camera that uses transforms to zoom out and translate across your level.
Hopefully this will help get you started.
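A rough sketch of the tile array and draw-range idea (Tile, tileSize, and the array layout are assumptions):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

class Tile
{
    public Texture2D Image; // plus animation, collision info, ...whatever
}

// Draw only the tiles the camera can currently see.
void DrawVisibleTiles(SpriteBatch spriteBatch, Tile[,] tiles, Vector2 camera,
                      int tileSize, int screenWidth, int screenHeight)
{
    int firstCol = (int)(camera.X / tileSize);
    int firstRow = (int)(camera.Y / tileSize);
    int cols = screenWidth / tileSize + 2;  // +2 covers partial tiles at the edges
    int rows = screenHeight / tileSize + 2;

    for (int r = firstRow; r < firstRow + rows && r < tiles.GetLength(1); r++)
        for (int c = firstCol; c < firstCol + cols && c < tiles.GetLength(0); c++)
        {
            if (r < 0 || c < 0 || tiles[c, r] == null) continue;
            var screenPos = new Vector2(c * tileSize, r * tileSize) - camera;
            spriteBatch.Draw(tiles[c, r].Image, screenPos, Color.White);
        }
}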
I've done a decent amount of work in XNA, and from my experience, there are 2 ways to draw a 2D scene:
1) Strictly 2D. This method is much easier, but has a few limitations. There is no "camera" per se; what you do is move everything underneath the fixed 2D "camera". I say "camera" in quotes because the camera is fixed (as far as I know). The upside is that it's easy; the downside is that you can't easily zoom in or out or do other camera effects.
2) 2D in 3D. Set up a 3D world with a 2D plane. This is more flexible, but is also more challenging to work with because you will need to set up a 3D world and 3D camera. If this is your first attempt with making a game, I would highly recommend against this method.
I'm really only familiar with the strictly 2D method. You would want a list of map objects that have a 2D coordinate. You would also want to store which section of the map you are looking at; I do this with a Rectangle or Vector2. This value moves forward as the character moves. You can then take your 2D map object's coordinate and subtract the (X,Y) of the top-left of what you are looking at to determine the object's screen position. So:
float screenX = myMapObject.X - focusPoint.X;
float screenY = myMapObject.Y - focusPoint.Y;
Another thing to note: use floats or Vector2/3 to store locations. You may not think it's required now, but it will be down the line.
It might be overkill, but my SourceForge project uses XNA to draw a strictly 2D scene that you can move around: http://sourceforge.net/projects/asteroidoutpost/
I hope this helps.
Have a look at Nick Gravelyn's tutorials. They helped me a ton when I was first starting out. Really worth a look for learning a lot about 2D games.
All the videos are now on YouTube here.
