Retrieve texture coordinates on a 3D model? - C#

Is it possible to retrieve the texture coordinates of an object, for example through hit-testing?
As an example: I use a 1920x1080 texture on a simple plane, and I want to get the coordinates (1920, 1080) if I click in the bottom-right corner. (The real model is slightly more complex, so calculating the position directly via math isn't that easy.)

When math does not work for some reason, I have used the following graphical hit-test: assign a unique color to each texel of your plane, render one frame to an offscreen surface with lighting and effects disabled, then read the pixel color under the cursor and translate its value back to texture coordinates. This is quite efficient on complex models as long as you don't need such lookups too often (say, only on a mouse click in a game), because reading pixels back stalls the graphics hardware pipeline and drains performance. It also potentially works with any projection, orthographic or perspective.
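The core of this trick is a reversible mapping between texel coordinates and a 24-bit color. A minimal sketch (Python for brevity; the bit arithmetic ports directly to C#, and the texture size matches the 1920x1080 example from the question):

```python
# Color-ID hit-test: each texel (u, v) gets a unique 24-bit color, so a pixel
# read back from the offscreen surface maps straight back to (u, v).

WIDTH, HEIGHT = 1920, 1080  # texture size from the question

def texel_to_color(u, v):
    """Pack texel coordinates into one 24-bit RGB value."""
    index = v * WIDTH + u            # unique id in [0, WIDTH * HEIGHT)
    return (index >> 16) & 0xFF, (index >> 8) & 0xFF, index & 0xFF

def color_to_texel(r, g, b):
    """Invert the packing: recover (u, v) from the color under the cursor."""
    index = (r << 16) | (g << 8) | b
    return index % WIDTH, index // WIDTH

r, g, b = texel_to_color(1919, 1079)
print(color_to_texel(r, g, b))       # -> (1919, 1079)
```

1920 x 1080 texels need about 2.07 million ids, which fits comfortably in 24 bits (16.7 million), so one RGB render target is enough; larger textures would need the alpha channel or a lower-resolution ID grid.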

Related

Allow a non-pixel-art game to handle pixel art (integer movement)

I'm making a 2D platform game engine with C# and MonoGame that uses floating point numbers for all of its maths. It works well so far, however I'd like to have the option of using the engine for a retro-style pixel art game, which is obviously best done with integer maths. Is there a simple way to achieve this?
The simplest method I can come up with is to do all the calculations with floating-point numbers, but round each sprite's position when drawing it to the nearest multiple of the pixel-art scale (for example, to the nearest 5 pixels for pixel art scaled 5x). This works, but the player character's movement on screen doesn't feel smooth.
I've tried rounding the player position itself each time I update it, but this breaks my collision detection, causing the player to be stuck on the floor.
Is there a standard way people solve this?
Thanks :)
Apologies for resurrecting an ancient question, but there is a very simple approach, for both the programmer and the hardware, that most directly simulates the look and feel of old hardware rendering at around 200p (I'm assuming you're using a GPU triangle-rasterization pipeline, as this is trivial otherwise): render your entire game to an offscreen texture at that actual 200p resolution, and perform all of your game logic, including collision detection, at that resolution. If you have to work in floating point, you can shift all coordinates by half a pixel, which, combined with nearest-neighbor sampling, should get them to plot precisely at the desired pixel.
Then just draw a rectangle with the offscreen texture to the screen, scaled to the full display resolution using nearest-neighbor sampling and integer scale factors (2x, 3x, 5x, etc.). That is much simpler than scaling each individual sprite and tile.
This assumes you want a full retro look and feel where you can't draw things at what would normally be sub-pixel positions (1-pixel increments after scaling 5x instead of 5-pixel increments, or even sub-pixel at full resolution). If you want something that feels more modern, where the scaled-up art can transform (move, rotate, scale) and animate in single-pixel increments or even sub-pixel with floating point, then you do need to scale up every individual sprite and tile. I don't think that would feel so retro and low-res, though; more like a very modern game with scaled-up blocky pixel art. Operating at the original low resolution tends to affect not just the look of the game but the feel as well.
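The only math the final blit needs is picking the integer scale factor and centering the result. A sketch under the assumptions above (Python for brevity; `fit_low_res` and the 320x200 game resolution are illustrative, and the logic maps directly to computing a destination rectangle in MonoGame):

```python
# "Render low-res, upscale once": given the display size and a fixed game
# resolution, pick the largest integer scale that fits and center the
# scaled rectangle on the display (letterboxing the remainder).

def fit_low_res(display_w, display_h, game_w=320, game_h=200):
    scale = max(1, min(display_w // game_w, display_h // game_h))
    out_w, out_h = game_w * scale, game_h * scale
    # Center the scaled rectangle; the uncovered borders stay black.
    x = (display_w - out_w) // 2
    y = (display_h - out_h) // 2
    return x, y, out_w, out_h, scale

print(fit_low_res(1920, 1080))  # -> (160, 40, 1600, 1000, 5)
```

The returned rectangle is what you would pass as the destination of the single fullscreen draw, with nearest-neighbor sampling enabled.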

Segmenting Kinect body arms

I am trying to segment arms from a Kinect depth image in my app:
I tried using joint positions to get the vector between the elbow and wrist/hand-tip, created a 2D rotated bounding rectangle between these two joints, and then removed all pixels outside the rectangle. The problem is that, depending on the distance from the sensor, this rectangle changes width and can become trapezoidal (e.g. if the hand is closer to the camera), so it basically only lets me discard parts of the image before doing the actual processing.
When the hand is near the body (like my left arm in the picture), I need to detect the edge of the hand, presumably by checking the depth gradient. But I couldn't find a flood-fill algorithm that "stops" at gradients.
Is there a better approach perhaps? I could use an algorithm idea.
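A flood fill that "stops at gradients" is just an ordinary BFS fill where a neighbor is accepted only if its depth differs from the current pixel by less than a threshold. A minimal sketch (Python for readability; the depth values and the 30 mm threshold are illustrative, not Kinect constants):

```python
# Gradient-limited flood fill over a depth map: grow a region from a seed
# pixel, but refuse to cross strong depth jumps (e.g. the hand's silhouette
# against the body behind it).
from collections import deque

def flood_fill_depth(depth, seed, max_step=30):
    """Return the set of (y, x) pixels reachable from seed without a depth jump."""
    h, w = len(depth), len(depth[0])
    visited = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in visited:
                # Stop at strong depth gradients (edge of the hand/arm).
                if abs(depth[ny][nx] - depth[y][x]) < max_step:
                    visited.add((ny, nx))
                    queue.append((ny, nx))
    return visited

# Toy example: a "hand" at depth ~800 mm in front of a body at ~1100 mm.
depth = [[800, 810, 1100],
         [805, 815, 1100]]
region = flood_fill_depth(depth, (0, 0))  # fills the four ~800 mm pixels only
```

Because the test is against the current pixel rather than the seed, the fill can follow a gently sloping arm while still refusing to jump across the sharp edge onto the torso.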

Compare two colors in an image + Which is best to draw 3D points

I know that it is possible using GetPixel(), but is it possible to let the software decide which of two colors is lighter or darker than the other?
I'm going to use a depth-map image, and I want to compare the colors of the pixels.
After that, I'm going to create a 3D point for each pixel depending on its color range; if it's light, it would be in the front, and so on.
Also, what is the simplest, fastest, and best way to draw a 3D point: OpenGL or WPF? Or another suggestion?
There are algorithms for calculating lightness from RGB values. As for the point drawing, it depends on your performance requirements: how many points you need to draw per frame. WPF may turn out to be fast enough for your needs. The simplest solution is a WPF Ellipse Shape, which is high level and, as a result, slower. If that is not fast enough, you could go for a lower-level API, down to the Visual layer. OpenGL and DirectX are even closer to the hardware. At that level there is no such thing as a point: the graphics device operates with polygons and textures, so you may need to create a 1x1-pixel texture to represent your point, create a quad, and map the texture onto the quad. Pretty complex stuff for drawing a point.
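One such lightness calculation is a weighted luminance of the RGB channels (Rec. 709 weights below); comparing two luminances answers the "which is lighter" question directly. A sketch in Python; the same two functions are a few lines of C# over `GetPixel()` results:

```python
# Compare two colors by perceptual lightness: compute a luminance value from
# RGB (Rec. 709 weights) and order the colors by it. Good enough for sorting
# depth-map pixels front to back as described in the question.

def luminance(r, g, b):
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def lighter(c1, c2):
    """Return the lighter of two (r, g, b) tuples."""
    return c1 if luminance(*c1) >= luminance(*c2) else c2

print(lighter((200, 200, 200), (50, 50, 50)))  # -> (200, 200, 200)
```

For a grayscale depth map the three channels are equal, so luminance reduces to the channel value itself and the comparison is exact.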

Scale a Plane to fit a Frustum or Grid Cell

I am attempting to create a function taking a plane in 3d space, and returning a plane which will fit in its entirety inside one section of a grid on the screen.
The grid on the screen is fixed and is defined by either divisions in X and Y, or by a set of lines across the screen.
The original plane can be any size or orientation on the screen, though it will never take the whole screen.
I am working in Unity 3.5.2f2 with C#. I have posted this on SO as it is quite heavily math-based, as opposed to just Unity general knowledge. Ideally a solution will not use external libraries, though that is a possibility.
I have a few methods in mind and would appreciate any input:
Project the plane to screen space, get the min/max x and y values of the mesh (its bounding box), and use this to calculate a scale transform (from the ratio of the mesh's height/width to that of a screen division). Then re-project into world space, after snapping two edges of the mesh to a selected division.
As the divisions are rectangular in nature, create several view frustums and come up with some method of scaling/translating the plane in 3D space to fit a frustum.
The function prototype would be:
Plane adjustPlaneToFitScreens(Plane _plane)
Any thoughts?
I solved this issue using method 1 above. Unity provides several handy functions that make the math easy, and calculating scaling and translation in pixel/screen space was far easier than doing it in 3D space while having to account for view angle/FOV.
There are issues with the re-projection into world space after the scaling, but this particular application doesn't move the camera when viewing the scaled object, so the issues are not actually noticeable in black-box testing.
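The screen-space half of method 1 boils down to one ratio: the cell size over the projected bounding-box size, taking the smaller axis so the plane fits entirely. A sketch (Python for brevity; `scale_to_fit_cell` is an illustrative name, and in Unity the bbox corners would come from `Camera.WorldToScreenPoint` over the mesh vertices):

```python
# Given the screen-space bounding box of the projected plane and the size of
# one grid cell, compute the uniform scale that makes the plane fit the cell.

def scale_to_fit_cell(bbox_min, bbox_max, cell_w, cell_h):
    """bbox_min/bbox_max are (x, y) screen coordinates of the projected mesh."""
    mesh_w = bbox_max[0] - bbox_min[0]
    mesh_h = bbox_max[1] - bbox_min[1]
    # Uniform scale, limited by whichever axis overflows the cell more.
    return min(cell_w / mesh_w, cell_h / mesh_h)

# A 400x100 px projection fitting into a 200x200 px cell:
print(scale_to_fit_cell((0, 0), (400, 100), 200, 200))  # -> 0.5
```

Applying that factor to the object's world-space scale, then translating so the snapped edges line up with the chosen division, reproduces the approach the answer describes.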

3d Tiled terrain

I'm trying to create tiled terrain in 3D with XNA. I checked tutorials on how to do it (Riemer's and Allen's). Allen's tutorial achieves exactly the result I want; however, I'm not sure about performance: it seems he uses a single quadrilateral to draw the whole terrain and processes it with a pixel shader, which means the whole terrain is processed each frame.
Currently I'm drawing a quadrilateral for each tile, which allows drawing only the visible tiles, but it also means that many more vertices need to be processed each frame and a lot of DrawIndexedPrimitives calls are made.
Am I doing it right, or is Allen's way faster? Is there a better way to do tiled terrain?
Thanks.
Totally depends on your terrain complexity and size. Typically, you will have terrain tiles with more than one quad per tile (for instance, a tile could consist of 4096 triangles) and then displace the vertices to get the terrain you want. Each tile is still an indexed primitive, but a single draw call now produces lots of triangles and covers a larger part of the terrain. Taking this idea further, you can make the tiles in the distance larger so you don't get too much detail (look for quad-tree/clipmap-based terrain approaches; you'll get something like this: http://twitpic.com/89y5kn).
Alternatively, if you can displace in the vertex shader, you can use instancing to further reduce the number of draw calls. Per instance, you pass the UV coordinates into your heightfield and the world-space position, and then you again render high-resolution tiles, but now you may wind up with a single draw call for the whole terrain.
For a small game, you might want to generate only a few high-resolution tiles (65k triangles or so) and then frustum-cull them. That gives you a large terrain easily and is still manageable; but this definitely doesn't scale too well :) Depends on your needs.
For the texture tiles, you can also use a low-resolution index texture and do the lookup into an atlas per pixel, or just store the indices in the vertex buffer and interpolate them (this is very common: store four weights per vertex and use them to look up four different textures).
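The four-weight scheme at the end amounts to a weighted sum of four texture samples per pixel. A sketch of just that blend (Python for clarity; the texture names and colors are illustrative, and in XNA this would be a few lines of HLSL over four `tex2D` samples):

```python
# Texture splatting: four weights per vertex/pixel, each selecting one
# terrain texture; the final color is the weighted sum of the four samples.

def blend_splat(weights, texture_colors):
    """weights: 4 floats summing to 1; texture_colors: 4 (r, g, b) samples."""
    r = sum(w * c[0] for w, c in zip(weights, texture_colors))
    g = sum(w * c[1] for w, c in zip(weights, texture_colors))
    b = sum(w * c[2] for w, c in zip(weights, texture_colors))
    return (r, g, b)

# 75% grass, 25% rock, no sand or snow (illustrative palette):
grass, rock = (40, 160, 40), (120, 120, 120)
sand, snow = (220, 200, 140), (250, 250, 250)
print(blend_splat((0.75, 0.25, 0.0, 0.0), (grass, rock, sand, snow)))
# -> (60.0, 150.0, 60.0)
```

Because the weights are stored per vertex and interpolated by the rasterizer, adjacent tiles blend smoothly without any per-pixel index lookup.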
