I have to draw a map with Managed DirectX. The map arrived in MapInfo format (lines, polylines, regions/polygons). Polygons are already triangulated (done with GLUtesselator).
The idea:
GPS Coordinates are converted to x,y points (Mercator Projection)
I use PositionColored VertexFormat
Center of the view is [x,y] (can modify by mouse move)
Camera is always positioned at [x,y,z], where z is the zoom (-100 by default, can be modified by mouse wheel)
Camera target is [x,y,0], camera up is [0,1,0] (a sketch of this view setup follows this list)
The layers of the map are positioned by Z (+1.0, 0.99, 0.98, 0.97...etc)
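Roughly, the view setup looks like this in Managed DirectX (a sketch; centerX, centerY and zoom stand for the values described above, the projection parameters are just what I would assume):

using System;
using Microsoft.DirectX;
using Microsoft.DirectX.Direct3D;

void SetupCamera(Device device, float centerX, float centerY, float zoom)
{
    // Camera sits at [x, y, zoom] and looks straight down at [x, y, 0], +Y is up.
    device.Transform.View = Matrix.LookAtLH(
        new Vector3(centerX, centerY, zoom),   // eye
        new Vector3(centerX, centerY, 0f),     // target
        new Vector3(0f, 1f, 0f));              // up

    device.Transform.Projection = Matrix.PerspectiveFovLH(
        (float)(Math.PI / 4),                  // field of view
        4f / 3f,                               // aspect ratio (assumed)
        1f, 10000f);                           // near / far planes
}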
I can already do:
Draw lines and polylines
Draw one layer of polygons
My problem is: when I want to draw all layers, I see only one of them. I think there is some problem with Z ordering. What should I do to solve this? Modify the RenderState? The best would be if I could draw as in GDI (first drawn at the back, last drawn at the front).
Other question: how can I get the coordinates of the point under the mouse cursor? (In the GDI version of the map I could do this because I used my own viewport for rendering, but now DirectX does everything.)
Thanks!
If your map is purely 2D, make sure that Z buffering is turned off. Once it is, things will display in the order you draw them in.
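In Managed DirectX that is a render-state change; a minimal sketch, assuming device is your Direct3D Device:

// Disable depth testing so primitives appear in the order they are drawn,
// like GDI painting: first drawn ends up at the back, last drawn at the front.
device.RenderState.ZBufferEnable = false;        // no depth test
device.RenderState.ZBufferWriteEnable = false;   // no depth writes

// ... draw the map layers back-to-front here ...

// Re-enable if other (true 3D) content still needs depth testing.
device.RenderState.ZBufferEnable = true;
device.RenderState.ZBufferWriteEnable = true;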
I am trying to segment arms from a Kinect depth image in my app.
I tried using joint positions to get the vector between elbow and wrist/hand-tip, and created a 2D bounding rotated rectangle between these two joints, and then removed all pixels outside the rectangle. The problem is that, depending on the distance from the sensor, this rectangle changes width, and can become trapezoidal (e.g. if hand is closer to the camera), so it can basically only allow me to discard parts of the image before doing actual processing.
When the hand is near the body (like my left arm below), I need to detect the edge of the hand - presumably by checking the depth gradient. But I couldn't find a flood fill algorithm which "stops" at gradients.
Is there a better approach perhaps? I could use an algorithm idea.
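For reference, by a flood fill that "stops" at gradients I mean something like a breadth-first fill over the depth image that refuses to cross pixels whose depth differs too much from their neighbour. A rough sketch of that idea (the row-major ushort depth layout and the threshold are assumptions):

using System;
using System.Collections.Generic;

// BFS flood fill over the depth image that stops where the depth gradient is too steep.
// depth:   row-major depth values in millimetres (assumption)
// seed:    a pixel known to be on the arm, e.g. the wrist joint mapped to depth space
// maxStep: maximum allowed depth difference between neighbouring pixels, e.g. 20 mm
static bool[] FillArm(ushort[] depth, int width, int height, int seedX, int seedY, int maxStep)
{
    var visited = new bool[width * height];
    var queue = new Queue<int>();
    queue.Enqueue(seedY * width + seedX);
    visited[seedY * width + seedX] = true;

    while (queue.Count > 0)
    {
        int index = queue.Dequeue();
        int x = index % width, y = index / width;
        int d = depth[index];

        // Visit the 4-connected neighbours.
        int[] nx = { x + 1, x - 1, x, x };
        int[] ny = { y, y, y + 1, y - 1 };
        for (int i = 0; i < 4; i++)
        {
            if (nx[i] < 0 || ny[i] < 0 || nx[i] >= width || ny[i] >= height) continue;
            int ni = ny[i] * width + nx[i];
            // Stop at invalid pixels and at steep depth steps (the edge of the arm).
            if (visited[ni] || depth[ni] == 0 || Math.Abs(depth[ni] - d) > maxStep) continue;
            visited[ni] = true;
            queue.Enqueue(ni);
        }
    }
    return visited; // true = pixel belongs to the filled arm region
}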
Hi, is there any way to get the X, Y, Z of the mouse in Direct3D after I translate and rotate the world matrix?
The mouse doesn't have a Z coordinate because it's not a three-dimensional pointing device.
The best you can do is project the mouse's (x,y) coordinate on the screen through the viewing frustum to determine which portion of the viewing frustum correlates to the pixel position under the mouse cursor.
DirectX is completely unaware of the mouse and any other input devices; that is simply not what it cares about.
To get the x and y coordinates, you call Win32 API functions (this depends on the framework you are using).
To get a z coordinate, you must implement Ray Picking. There is no uniform way, as this depends on how picked objects are implemented. Here are some tutorials on XNA Picking.
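As a rough illustration of the idea, in XNA flavour (the same pattern applies to any Direct3D wrapper that offers an Unproject helper):

using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Build a pick ray from the 2D mouse position by unprojecting it at the near
// and far planes of the view frustum.
Ray GetPickRay(Viewport viewport, int mouseX, int mouseY,
               Matrix world, Matrix view, Matrix projection)
{
    Vector3 near = viewport.Unproject(new Vector3(mouseX, mouseY, 0f), projection, view, world);
    Vector3 far  = viewport.Unproject(new Vector3(mouseX, mouseY, 1f), projection, view, world);

    Vector3 direction = Vector3.Normalize(far - near);
    return new Ray(near, direction);
}

Intersect this ray with your scene (bounding boxes/spheres, triangles) to find the z of whatever lies under the cursor.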
I plot a surface in a Plot Cube with TwoDMode = true. When I try to zoom using the left mouse drag, the zoom selection rectangle goes behind the surface and is therefore not shown properly. Is it possible to force the selection rectangle to be on top of the surface? Also, is it possible to show the X, Y and Z values of the surface in some textboxes when hovering or clicking the mouse on it? Thank you very much.
Surfaces are inherently 3D objects. By default, they are intended to be used with ILPlotCube.TwoDMode set to false. But you can try to access the selection rectangle object and modify it accordingly. Try starting with plotCube.ZoomRectangle.Lines.Positions by raising its Z coordinate in order to move it closer to the camera.
Getting the point of the surface under the cursor is not easy, but doable. Keep in mind that only the vertices of the surface tiles are known explicitly. You can use picking and the mouse events to get informed when the mouse is over the surface:
surface.MouseMove += (_s, _a) => { yourHandler(_a); };
Afterwards, you are on your own. First, you will have to find the actual surface 3D coordinates. If you can be sure that the surface has not been rotated, you can take a look here.
The method in that thread gives you the surface X and Y coordinates. You can go further and (manually) find the corresponding tile for that position. For the final and exact X,Y,Z coordinates, you would have to interpolate the tile (triangle) vertices to the actual mouse position, using barycentric interpolation.
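For that interpolation step, a small self-contained sketch (plain math, no ILNumerics types assumed):

// Barycentric interpolation of Z at point (px, py) lying inside the triangle
// (x1,y1,z1), (x2,y2,z2), (x3,y3,z3) - the three vertices of the surface tile.
static double InterpolateZ(double px, double py,
                           double x1, double y1, double z1,
                           double x2, double y2, double z2,
                           double x3, double y3, double z3)
{
    double det = (y2 - y3) * (x1 - x3) + (x3 - x2) * (y1 - y3);
    double w1 = ((y2 - y3) * (px - x3) + (x3 - x2) * (py - y3)) / det;
    double w2 = ((y3 - y1) * (px - x3) + (x1 - x3) * (py - y3)) / det;
    double w3 = 1.0 - w1 - w2;
    return w1 * z1 + w2 * z2 + w3 * z3; // weighted mix of the three vertex heights
}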
In order to show the 3D coordinate, you can simply use an ILLabel. You may or may not want to put that into an ILScreenObject.
What is the best way to position the camera so that I can see what I paint in a certain region?
E.g. I'm painting a rectangle at around 300,400,2200. Where do I have to place the camera, and which view do I have to set, so that everything fits "in"?
Is there a trick or a special method, or do I have to try it out with different camera positions?
There is no standard function that will position the camera this way because there are many options (think of different sides and rotations)
A trick you could use is:
Take the center of the MeshGeometry3D by using the Bounds property and add the normal vector several times to position the Camera.
Then use the normal vector of the plane, invert it and use it as the LookDirection for the camera.
How far you need to move the camera depends on the view angle of the camera. It can be calculated. Let me know if you want to know how (it will take me a little extra time)
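Here is a rough sketch of that calculation, assuming a WPF PerspectiveCamera and using the bounding sphere of the mesh (the helper name and the up vector are just placeholders):

using System;
using System.Windows.Media.Media3D;

// Position a PerspectiveCamera so the whole mesh fits into view.
// 'normal' is the unit normal of the plane you want to face (the inverted
// LookDirection described above).
static void FrameMesh(PerspectiveCamera camera, MeshGeometry3D mesh, Vector3D normal)
{
    Rect3D bounds = mesh.Bounds;
    Point3D center = new Point3D(bounds.X + bounds.SizeX / 2,
                                 bounds.Y + bounds.SizeY / 2,
                                 bounds.Z + bounds.SizeZ / 2);

    // Radius of a sphere that certainly contains the mesh.
    double radius = 0.5 * Math.Sqrt(bounds.SizeX * bounds.SizeX +
                                    bounds.SizeY * bounds.SizeY +
                                    bounds.SizeZ * bounds.SizeZ);

    // Distance at which that sphere fits into the camera's view angle
    // (WPF's FieldOfView is the horizontal view angle, in degrees).
    double fov = camera.FieldOfView * Math.PI / 180.0;
    double distance = radius / Math.Tan(fov / 2.0);

    camera.Position = center + normal * distance; // move back along the normal
    camera.LookDirection = -normal;               // look back at the mesh
    camera.UpDirection = new Vector3D(0, 1, 0);   // assumed up vector
}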
More information can be found here too
Here's the setup: This is for an ecommerce art site where some paintings are canvas transfers. The painting wraps around the sides and top and bottom of the canvas. We have high-res images of the entire painting, but what we want to display is a quasi-3D representation of the image in which you can see how the sides of the painting wrap around the canvas. Here's a rough sketch of what I'm talking about:
My question is, how can I rotate an image in 3D space? The approach I think I'd like to take is to cut off a portion of the top and side of the image, rotate them in 3D, and then stitch them back onto the top and side to give it the 3D look. How do I go about doing that? It can be done using any .NET technology (GDI+, WPF, etc.).
In WPF, using the Viewport3D class, you can create a cuboid which is 8x5x1 units. Create the image as a texture and then apply it to the front face (8x5), the side faces (5x1) and the top and bottom faces (8x1) using texture coordinates. The front face coordinates should be (1/9, 1/6), (8/9, 1/6), (1/9, 5/6) and (8/9, 5/6); the sides run from the nearest edge to those coordinates, e.g. (0, 1/6), (1/9, 1/6), (0, 5/6) and (1/9, 5/6) for the left side.
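A sketch of just the front face, to show the idea (the other faces follow the same pattern with the coordinates above; the method name and image parameter are placeholders):

using System.Windows;
using System.Windows.Media;
using System.Windows.Media.Media3D;

// Front face (8x5) of the canvas cuboid, textured with the centre region of the painting.
GeometryModel3D BuildFrontFace(ImageSource paintingImage)
{
    var mesh = new MeshGeometry3D();

    // Four corners of the 8x5 front face (z = 0.5, i.e. half of the 1-unit depth).
    mesh.Positions.Add(new Point3D(-4, -2.5, 0.5));
    mesh.Positions.Add(new Point3D( 4, -2.5, 0.5));
    mesh.Positions.Add(new Point3D( 4,  2.5, 0.5));
    mesh.Positions.Add(new Point3D(-4,  2.5, 0.5));

    // Texture coordinates picking the central part of the image, matching the
    // fractions above (note that WPF's v axis runs from top to bottom).
    mesh.TextureCoordinates.Add(new Point(1.0 / 9, 5.0 / 6));
    mesh.TextureCoordinates.Add(new Point(8.0 / 9, 5.0 / 6));
    mesh.TextureCoordinates.Add(new Point(8.0 / 9, 1.0 / 6));
    mesh.TextureCoordinates.Add(new Point(1.0 / 9, 1.0 / 6));

    // Two triangles making up the quad.
    mesh.TriangleIndices = new Int32Collection(new[] { 0, 1, 2, 0, 2, 3 });

    var material = new DiffuseMaterial(new ImageBrush(paintingImage));
    return new GeometryModel3D(mesh, material);
}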
Edit:
If you then want to be able to perform rotations on the 3D canvas model you can follow the advice here:
How can I do 3D transformation in WPF?
It looks like you don't need to do real 3D, only to fake it.
Chop off four strips along the top, bottom, left and right of the image. Toss the bottom and right (going by your sketch in the question). Scale and shear the strips (I'm not expert enough at .net/wpf to know how, but it can do it). The top would be scaled vertically by a factor of 0.5 (a guess - choose to fit the desired final 3D-looking image) and sheared horizontally. The result is composited onto the output image as the top side of the canvas. The left strip would be scaled horizontally and sheared vertically.
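Here is a rough GDI+ sketch for the top strip; the 0.5 scale and the shear factor are the guesses mentioned above, to be tuned by eye:

using System.Drawing;
using System.Drawing.Drawing2D;

// Cut the top strip off the painting, squash it vertically and shear it
// horizontally so it reads as the top side of the canvas.
static Bitmap MakeTopStrip(Bitmap source, int stripHeight)
{
    Bitmap strip = source.Clone(new Rectangle(0, 0, source.Width, stripHeight),
                                source.PixelFormat);

    var result = new Bitmap(source.Width + stripHeight, stripHeight / 2 + 1);
    using (Graphics g = Graphics.FromImage(result))
    using (var m = new Matrix())
    {
        m.Scale(1f, 0.5f);  // squash vertically (guessed factor, adjust to taste)
        m.Shear(-1f, 0f);   // shear horizontally to suggest the receding top edge
        g.Transform = m;
        g.DrawImage(strip, stripHeight, 0); // offset so the sheared strip stays in frame
    }
    return result;
}

The left strip is the same idea with the axes swapped: scale horizontally, shear vertically, then composite both strips plus the untouched front onto the output image.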
If the end user is to view the 3D canvas from different angles interactively, this method is probably faster than rendering an honest 3D model, which would have to do texture mapping and rasterizing the model into a final image, which amounts to doing the same math. The fun part is figuring out how to adjust the scaling and shearing parameters.
This page might be educational: http://www.idomaths.com/linear_transformation.php
and this could be useful reference http://en.csharp-online.net/GDIplus_Graphics_Transformation%E2%80%94Image_Transformation
I don't have any experience in this kind of stuff, but when I saw this question, the first thing that came to my mind was the funny Unicornify for SO.
In this making-of article by balpha, he explains how the 2D unicorn sphere is rotated in 3D space.
But the code is written in Python. If you are interested, you can take a look at it, though I am not exactly sure this would help you.
The brute force approach (which might be the easiest one) is to map the u,v texture coordinates for each of the three faces onto three billboards representing three sides of the canvas (a billboard is just two triangles that make a rectangle). Then rotate the whole canvas (all three billboards) using matrix transforms. Tada!
Alternately, you can move the 3-space camera position with a transform rather than the canvas. Six of one, half a dozen of the other, as they say.
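In WPF terms, the "rotate the whole canvas" step could look roughly like this (the Model3DGroup holding the three billboards is an assumption):

using System.Windows.Media.Media3D;

// Rotate the whole three-billboard canvas model around the vertical axis.
static void RotateCanvas(Model3DGroup canvasBillboards, double angleDegrees)
{
    canvasBillboards.Transform = new RotateTransform3D(
        new AxisAngleRotation3D(new Vector3D(0, 1, 0), angleDegrees));
}

For the camera variant, apply the inverse transform to the camera instead and leave the model alone.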