WPF: Finding 3D-visuals that are partially inside a 2D rectangle - c#

I am making a WPF program that lets the user modify data graphically in 3D. To give the user the option to select multiple graphical objects at the same time, I want to implement a selection rectangle (just like the one in Windows Explorer). A common feature in programs like this is to offer two different behaviours for the selection rectangle and let the user somehow choose which one is used:
1. Any object that is partially or completely inside the rectangle is selected.
2. Only objects that are completely inside the rectangle are selected.
The second method is straightforward: take the bounding box of each object and check whether it is inside the rectangle. The first one, on the other hand, seems to be quite a bit of work. All my graphical objects are complicated 3D figures and can be rotated by the user in any way. At the moment I cannot find any other way than checking whether any of the triangles in the mesh of any of the objects crosses my 2D rectangle, and that can be quite time-consuming.
I have little experience with WPF 3D, but I have done this before in OpenGL. There I could tell OpenGL to draw a specific area of the screen and then collect a list of the objects that were visible in that area. All I needed to get the functionality I wanted was about five lines of code.
I guess my question is this:
Is there a way to do this with WPF 3D, similar to the OpenGL approach?
If not, is there any other smart way to find all objects (Visual3D) in a viewport that is partially behind a 2D rectangle?
I refuse to believe I am the only one with this kind of problem, so I hope a clever mind can point me in the right direction.
Regards,
Sverre
Thank you for your answer!
The 2D-rectangle is just in front of the camera and extending infinitely forward. I want to get any object that is partially or completely inside that frustum.
The camera we are using is an orthographic or perspective projection camera (System.Windows.Media.Media3D.ProjectionCamera). The reason we are not using the matrix camera is that we are using a 3rd party tool that does not support the matrix camera. But I am sure there is a way to get the matrix from a projection camera as well, so that is hopefully not the problem.
In theory your solution sounds like just what we need, but I am not sure how to proceed. Do you have any links to sample-code, or can you give some more hints on how to actually implement this?
Btw: since we are working with WPF, we do not have direct access to DirectX. At least that's what we have concluded after some research. You mention use of the z-buffer, which we haven't been able to access through WPF. If you know a way to access the z-buffer, it's greatly appreciated! This is off-topic, but we have struggled to disable the z-buffer for some time and have given up…
Best regards,
Sverre

Is your intersection region a 2D rectangle, or a frustum based at a 2D rectangle and extending infinitely forward (or perhaps to some clipping limit)? If it can be construed as a viewing frustum, then you can leverage the existing capabilities of the graphics system: render the scene using a camera view and projection that correspond to your originating rectangle, with all lighting and shading disabled and colors chosen specifically to 'tag' the different objects in your scene. This lets you use the graphics hardware to perform the clipping/projection as a 'rendering' operation, and then simply enumerate the pixel values as 'tags' to determine the objects present in the rectangular view.
If you need to restrict selection to an actual 2D slice (or a very shallow frustum), you can use the Z-buffer (if you can get access to it) to exclude tagged pixels that are outside the Z range of your desired selection frustum.
The nice thing about this approach is that you probably already have the camera matrix (it is the same one used to render your window) and only need to change the projection matrix to cover a subset of the viewing window.
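Here is a minimal, untested sketch of the colour-tagging idea in plain WPF, assuming the scene is built from unfrozen GeometryModel3D objects hosted in a Viewport3D. The helper name, the colour-encoding scheme and the use of an emissive material (so lighting does not alter the tag colours) are my own choices for illustration; only models actually visible inside the rectangle show up, and anti-aliased edge pixels simply fail the colour lookup.

    using System.Collections.Generic;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using System.Windows.Media.Media3D;

    static class RectangleSelection
    {
        // Renders the viewport with one flat colour per model and returns the models
        // whose colour shows up inside the selection rectangle (viewport pixel coords).
        public static ISet<GeometryModel3D> HitTestRectangle(
            Viewport3D viewport, IList<GeometryModel3D> models, Int32Rect selection)
        {
            var colourToModel = new Dictionary<Color, GeometryModel3D>();
            var originalMaterials = new Dictionary<GeometryModel3D, Material>();

            // Tag each model with a unique emissive colour (emissive ignores lighting).
            for (int i = 0; i < models.Count; i++)
            {
                var tag = Color.FromRgb((byte)(i & 0xFF), (byte)((i >> 8) & 0xFF), 255);
                colourToModel[tag] = models[i];
                originalMaterials[models[i]] = models[i].Material;

                var flat = new MaterialGroup();
                flat.Children.Add(new DiffuseMaterial(Brushes.Black));
                flat.Children.Add(new EmissiveMaterial(new SolidColorBrush(tag)));
                models[i].Material = flat;
            }

            // Render the viewport off-screen and read back the selected pixels.
            var bmp = new RenderTargetBitmap((int)viewport.ActualWidth,
                (int)viewport.ActualHeight, 96, 96, PixelFormats.Pbgra32);
            bmp.Render(viewport);
            int stride = selection.Width * 4;
            var pixels = new byte[stride * selection.Height];
            bmp.CopyPixels(selection, pixels, stride, 0);

            // Restore the real materials.
            foreach (var pair in originalMaterials)
                pair.Key.Material = pair.Value;

            // Every distinct tag colour found in the rectangle is a (partially) visible model.
            var hits = new HashSet<GeometryModel3D>();
            for (int p = 0; p < pixels.Length; p += 4)
            {
                var c = Color.FromRgb(pixels[p + 2], pixels[p + 1], pixels[p]); // Pbgra32 is B,G,R,A
                if (colourToModel.TryGetValue(c, out var model))
                    hits.Add(model);
            }
            return hits;
        }
    }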

A 'smart' way would be to transform the rectangle into a box (frustum) using the camera's matrix, and then do an intersection test between all the objects and that box.
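A rough, untested sketch of the same idea run in the opposite direction: instead of unprojecting the rectangle into a box, project each visual's 3D bounding box into viewport coordinates (Visual3D.TransformToAncestor, available since .NET 3.5 SP1) and intersect the resulting 2D rectangle with the selection rectangle. It is only a bounding-box approximation, so it can over-select; IntersectsSelection is a made-up helper name.

    using System;
    using System.Windows;
    using System.Windows.Controls;
    using System.Windows.Media.Media3D;

    static class BoxSelection
    {
        // True if the 2D bounding rectangle of the visual's projected bounding box
        // overlaps the selection rectangle (both in viewport coordinates).
        public static bool IntersectsSelection(ModelVisual3D visual, Viewport3D viewport, Rect selection)
        {
            if (visual.Content == null) return false;
            GeneralTransform3DTo2D toScreen = visual.TransformToAncestor(viewport);
            if (toScreen == null) return false;

            Rect3D b = visual.Content.Bounds; // bounds in the visual's own 3D space
            double minX = double.PositiveInfinity, minY = double.PositiveInfinity;
            double maxX = double.NegativeInfinity, maxY = double.NegativeInfinity;

            for (int i = 0; i < 8; i++)
            {
                // Enumerate the 8 corners of the bounding box.
                var corner = new Point3D(
                    b.X + ((i & 1) != 0 ? b.SizeX : 0),
                    b.Y + ((i & 2) != 0 ? b.SizeY : 0),
                    b.Z + ((i & 4) != 0 ? b.SizeZ : 0));

                Point p;
                if (!toScreen.TryTransform(corner, out p)) continue;
                minX = Math.Min(minX, p.X); maxX = Math.Max(maxX, p.X);
                minY = Math.Min(minY, p.Y); maxY = Math.Max(maxY, p.Y);
            }

            if (double.IsInfinity(minX)) return false; // nothing could be projected
            var screenRect = new Rect(new Point(minX, minY), new Point(maxX, maxY));
            return screenRect.IntersectsWith(selection);
        }
    }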

Related

Optimal 2D Tile rendering in C#

So I'm making a simple 2D side-scrolling game in C#, but I've found that Graphics.DrawImage doesn't really let me update the tiles the way I want. For example, I tell it to draw the image and it stays where I told it to be. I want to be able to move the entire scene left and right. It would be easier if I could use a for loop or something and define each tile's position every time it draws the image.
This may be confusing and I'm certain there's a way of doing it, I just don't know how.
So my question to you is: how can I control the positioning of each rectangle drawn on a form so that I can scroll the entire scene to the left when I wish?
I'm certainly no game/graphics expert, so take this with a grain of salt.
A couple things spring to mind.
First, you could pre-render the entire level as one bitmap, then just paint the relevant portion into your PictureBox.
Second, the Graphics class has a Transform property (I may have the name slightly wrong). You could add a linear transformation to the graphics object so the painting coordinates of your tiles wouldn't change, but the graphics object would slide, or I think the graphics term is "translate", the output en masse.
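A minimal sketch of that second idea in WinForms, assuming a custom control that keeps a scroll offset; the field and tile names are made up for illustration, and Graphics.TranslateTransform does the "translate the whole output" part.

    using System.Drawing;
    using System.Windows.Forms;

    class TileView : Panel
    {
        private Image tileSheet;        // kept simple: one image drawn per tile
        private Point[] tilePositions;  // tile positions in fixed level coordinates
        private float scrollX;          // current horizontal scroll offset in pixels
        private const int TileSize = 32;

        public void ScrollBy(float dx)
        {
            scrollX += dx;
            Invalidate();               // repaint with the new offset
        }

        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            if (tileSheet == null || tilePositions == null) return;

            // Slide the whole coordinate system; the tiles keep their fixed positions.
            e.Graphics.TranslateTransform(-scrollX, 0);

            foreach (Point p in tilePositions)
                e.Graphics.DrawImage(tileSheet, p.X * TileSize, p.Y * TileSize);
        }
    }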

Isometric Drawing - filling in the blanks

I'm trying to write a small application that makes isometric shapes. I've been able to make a cube/cuboid shape (in the picture below), but I need to be able to fill in the faces of the cubes.
http://peterfleming.net84.net/cube.PNG
I used a DrawLine method and mapped the points, but I was hoping there was an easier way, perhaps a DrawRectangle method that I could use instead and then fill the shapes in. So far I haven't been able to find one, perhaps because of the isometric nature of the shape.
Any help would be appreciated
What you're after can be easily done with the Polygon class.
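Which class that is depends on the stack: in WPF it is System.Windows.Shapes.Polygon, while in WinForms/GDI+ (which the DrawLine code above suggests) the equivalent is Graphics.FillPolygon. A small GDI+ sketch of filling one isometric face; the corner offsets are illustrative only.

    using System.Drawing;

    static class IsoFaces
    {
        // Fills the top face of an isometric cube whose back corner is at 'origin'.
        public static void DrawTopFace(Graphics g, Point origin)
        {
            Point[] topFace =
            {
                new Point(origin.X,      origin.Y),       // back corner
                new Point(origin.X + 40, origin.Y + 20),  // right corner
                new Point(origin.X,      origin.Y + 40),  // front corner
                new Point(origin.X - 40, origin.Y + 20),  // left corner
            };

            using (var fill = new SolidBrush(Color.LightGray))
                g.FillPolygon(fill, topFace);

            g.DrawPolygon(Pens.Black, topFace);           // outline drawn over the fill
        }
    }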

List Coordinates of Pixels Inside a Triangle

I'm making a game in C# and XNA, and I was trying to come up with a method to render massive terrains without using a tremendous amount of memory or passing the poly limit hard-coded into XNA.
My solution so far is to create a massive heightmap, and that heightmap is loaded into memory at the beginning of the game in the initialization phase. Then, terrain is only generated nearest to the camera. This is accomplished by projecting a triangle whose apex is the character and whose other two vertices extend to the sides of the character's viewing area. Then, all the pixels inside that triangle on the heightmap are rendered and drawn into the game, thus only rendering what is seen.
The problem is, I've successfully found (I think, can't test until I get terrain rendering) the three vertices of the triangle. Now I need to find a list of the coordinates for every single pixel inside that triangle - whole numbers only, because I just need a list of pixels to render.
I know it sounds a little confusing, so here's the gist of it:
I have an image, and I project a triangle onto that image. The only thing I know about that triangle are the three vertices. I need a list of the pixels inside that triangle.
I've been Googling around for maybe 20 minutes now, and I figured I might as well go ahead and post something here, since what I'm trying to do isn't all that common. If I find an answer, I'll be sure to post it here.
But until then, can anyone tell me how to accomplish this?
Edit: A formula, please. If you can provide a formula or algorithm, and an explanation, that would be just perfect.
Edit: I've posted a new question, as I've ditched this method of rendering large terrains. The question is here.
Start here:
http://mathworld.wolfram.com/TriangleInterior.html
One of the non-trivial problems, not mentioned there, that you have to deal with is the pixelization along the boundary.
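A small sketch of one common way to enumerate those pixels, using the sign-of-area ("same side") test described on the page above: walk the triangle's bounding box and keep every integer point for which the three edge tests agree in sign. The XNA Vector2/Point types are used only for convenience.

    using System;
    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    static class TriangleRasterizer
    {
        // Integer coordinates inside (or on the edge of) the triangle (a, b, c).
        public static List<Point> PixelsInTriangle(Vector2 a, Vector2 b, Vector2 c)
        {
            var pixels = new List<Point>();

            int minX = (int)Math.Floor(Math.Min(a.X, Math.Min(b.X, c.X)));
            int maxX = (int)Math.Ceiling(Math.Max(a.X, Math.Max(b.X, c.X)));
            int minY = (int)Math.Floor(Math.Min(a.Y, Math.Min(b.Y, c.Y)));
            int maxY = (int)Math.Ceiling(Math.Max(a.Y, Math.Max(b.Y, c.Y)));

            for (int y = minY; y <= maxY; y++)
            {
                for (int x = minX; x <= maxX; x++)
                {
                    var p = new Vector2(x, y);
                    float d1 = Edge(a, b, p);
                    float d2 = Edge(b, c, p);
                    float d3 = Edge(c, a, p);

                    bool hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
                    bool hasPos = d1 > 0 || d2 > 0 || d3 > 0;

                    // Inside when all three signed areas agree in sign (zero = on an edge).
                    if (!(hasNeg && hasPos))
                        pixels.Add(new Point(x, y));
                }
            }
            return pixels;
        }

        // Twice the signed area of triangle (v0, v1, p); its sign tells which side of v0->v1 p lies on.
        private static float Edge(Vector2 v0, Vector2 v1, Vector2 p)
        {
            return (v1.X - v0.X) * (p.Y - v0.Y) - (v1.Y - v0.Y) * (p.X - v0.X);
        }
    }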

.NET high level graphics library

I am programming various simulation tools in C#/.NET.
What I am looking for is a high-level visualization library: create a scene, with a camera with some standard controls, and render a few hundred thousand spheres to it, or some wireframes. That kind of thing. If it takes more than one line to initialize a context, it deviates from my ideal.
I've looked at SlimDX, but it's way lower level than I'm looking for (at least the documented parts, but I don't really care for any other). WPF perspective looked cool, but it seems targeted at static XAML-defined scenes, and that doesn't really suit me either.
Basically, I'm looking for the kind of features languages like BlitzBasic used to provide. Does that exist at all?
I'm also interested in this (as I'm also developing simulation tools) and ended up hacking together some stuff in XNA. It's definitely a lot more work than you've described, however. Note that anything you can do in WPF via XAML can also be done via code, as XAML is merely a representation of an object hierarchy and its relationships. I think that may be your best bet, though I don't have any metrics on what kind of performance you could expect with a few hundred thousand spheres (you're absolutely going to need some culling in that case and the culling itself may be expensive if you don't use optimizations like grid partitioning.)
EDIT: If you really need to support 100K entities and they can all be rendered as spheres, I would recommend that you bypass the 3d engine entirely and only use XNA for math. I would imagine an approach like the following:
Use XNA to set up Camera (View) and Perspective matrices. It has some handy Matrix static functions that make this easy.
Compute the Projection matrix and project all of your 'sphere' origin points to the viewing frustum. This will give you X,Y screen coordinates and Z depth in the frustum. You can either express this as 100K individual matrix multiplications or as a multiplication of the Projection matrix by a single 3 x 100K element matrix. In the former case, this is a great candidate for parallelism using the new .NET 4 Parallel functionality. (A short sketch of these two steps appears below.)
If you find that the 100K matrix multiplications are a problem, you can reduce this significantly by performing culling of points before transformation if you know that only a small subset of them will be visible at a given time. For instance, you can invert the Projection matrix to find the bounds of your frustum in your original space and create an axis-aligned bounding box for the frustum. You can then exclude all points outside this box (simple comparison tests in X, Y and Z). You only need to recompute this bounding box when the Projection matrix changes, so if it changes infrequently this can be a reasonable optimization.
Once you have your transformed points, clip any outside the frustum (Z < 0, Z > maxDist, X < 0, Y < 0, X > width, Y > height). You can now render each point by drawing a filled circle, with its radius proportional to Z (Z = 0 would have the largest radius and Z = maxDist would probably fade to a single point). If you want to provide a sense of shading/depth, you can render with a shaded brush to very loosely emulate lighting on spheres. This works because everything in your scene is a sphere and you're presumably not worried about things like shadows. All of this would be fairly easy to do in WPF (including the shaded brush), but be sure to use DrawingVisual classes and not framework elements. Also, you'll need to make sure you draw in the correct Z order, so it helps if you store the transformed points in a data structure that sorts as you add.
If you're still having performance problems, there are further optimizations you can pursue. For instance, if you know that only a subset of your points are moving, you can cache the transformed locations for the immobile points. It really depends on the nature of your data set and how it evolves.
Since your data set is so large, you might consider changing the way you visualize it. Instead of rendering 100K points, partition your working space into a volumetric grid and record the number (density) of points inside each grid cube. You can Project only the center of the grid and render it as a 'sphere' with some additional feedback (like color, opacity or brush texture) to indicate the point density. You can combine this technique with the traditional rendering approach, by rendering near points as 'spheres' and far points as 'cluster' objects with some brush patterning to match the density. One simple algorithm is to consider a bounding sphere around the camera; all points inside the sphere will be transformed normally; beyond the sphere, you will only render using the density grid.
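A sketch of the first two steps above, assuming XNA is used purely for the math as suggested. ProjectVisibleCentres and its parameters are illustrative names; the field of view, near-plane distance and the Y flip for screen coordinates are arbitrary assumptions.

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;

    static class SphereProjection
    {
        // Projects sphere centres with XNA's matrix helpers and keeps the ones that
        // land inside the viewing frustum, as (screenX, screenY, depth 0..1) triples.
        public static List<Vector3> ProjectVisibleCentres(IEnumerable<Vector3> centres,
            Vector3 cameraPos, Vector3 cameraTarget, float aspectRatio, float maxDist,
            float screenWidth, float screenHeight)
        {
            Matrix view = Matrix.CreateLookAt(cameraPos, cameraTarget, Vector3.Up);
            Matrix projection = Matrix.CreatePerspectiveFieldOfView(
                MathHelper.PiOver4, aspectRatio, 0.1f, maxDist);
            Matrix viewProj = view * projection;

            var visible = new List<Vector3>();
            foreach (Vector3 p in centres)
            {
                Vector4 clip = Vector4.Transform(new Vector4(p, 1f), viewProj);
                if (clip.W <= 0f) continue;                       // behind the camera
                Vector3 ndc = new Vector3(clip.X, clip.Y, clip.Z) / clip.W;
                if (ndc.X < -1 || ndc.X > 1 || ndc.Y < -1 || ndc.Y > 1 || ndc.Z < 0 || ndc.Z > 1)
                    continue;                                     // outside the frustum

                visible.Add(new Vector3(
                    (ndc.X * 0.5f + 0.5f) * screenWidth,
                    (1f - (ndc.Y * 0.5f + 0.5f)) * screenHeight,  // screen Y grows downwards
                    ndc.Z));
            }
            return visible;
        }
    }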
Maybe XNA Game Studio is what you are looking for.
Also take a look at DirectX.
WPF perspective looked cool, but it seems targeted at static XAML defined scenes
Look again, WPF can be as dynamic as you will ever need.
You can write any WPF program, including 3D, totally without XAML.
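For example, here is a minimal code-only WPF 3D scene (no XAML at all): one triangle, one light and one camera, built entirely from objects. The geometry and colours are arbitrary.

    using System.Windows.Controls;
    using System.Windows.Media;
    using System.Windows.Media.Media3D;

    static class CodeOnlyScene
    {
        public static Viewport3D Build()
        {
            // One triangle in the XY plane.
            var mesh = new MeshGeometry3D();
            mesh.Positions.Add(new Point3D(0, 0, 0));
            mesh.Positions.Add(new Point3D(1, 0, 0));
            mesh.Positions.Add(new Point3D(0, 1, 0));
            mesh.TriangleIndices.Add(0);
            mesh.TriangleIndices.Add(1);
            mesh.TriangleIndices.Add(2);

            var group = new Model3DGroup();
            group.Children.Add(new GeometryModel3D(mesh, new DiffuseMaterial(Brushes.Red)));
            group.Children.Add(new AmbientLight(Colors.White));

            var viewport = new Viewport3D
            {
                Camera = new PerspectiveCamera(new Point3D(0, 0, 5),   // position
                                               new Vector3D(0, 0, -1), // look direction
                                               new Vector3D(0, 1, 0),  // up
                                               45)                     // field of view
            };
            viewport.Children.Add(new ModelVisual3D { Content = group });
            return viewport;
        }
    }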
Do you have to use C#/.NET, or would MonoDevelop be good enough? I can recommend http://unity3d.com/ if you want a powerful 3D engine.

Hiding objects that obscure the player in a 3D scene

I'm designing a 3D game with a camera not entirely unlike that in The Sims and I want to prevent the player character from being hidden behind objects, including walls, pillars and other objects.
One easy way to handle the walls case is to have them face inwards and not have an other side, but that won't cover the other cases at all.
What I had planned is to somehow check for objects that are "in front" of the player, relative to the camera, and hide them - be it by alpha blending or not rendering at all.
One probably not so good idea I had in mind is to scan from the camera to the player in a straight line and see if you hit a non-hidden object, continuing until you reach the player. Unfortunately, I am an almost complete newbie on 3D programming.
Demonstration SVG illustration: that wall panel obscures the player, so it must be hidden. Another unrelated and pretty much already solved problem is removing all three wall panels on that side, which is irrelevant to this question and only caused by the mapping system I came up with.
What I had planned is to somehow check for objects that are "in front" of the player, relative to the camera, and hide them - be it by alpha blending or not rendering at all.
This is a good plan. You'll want to incorporate some kind of bounding volume onto the player, so the entire player (plus a little extra) is visible at all times. Then, simply run the intersection algorithm for each corner of the bounding volume.
Finding which object is at a given point on screen is called picking. Here's an XNA link for you which should direct you to an example. The idea is that you retrieve the 3D point in the game from the 2D point, and then can use standard collision detection methods to work out which object is occupying that space. Then you can elect to render that object differently.
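A rough XNA-flavoured sketch of that picking idea, assuming the obstacles carry bounding boxes: unproject the player's screen position into a ray and collect every box the ray hits before it reaches the player. FindOccluders and its parameters are illustrative names.

    using System.Collections.Generic;
    using Microsoft.Xna.Framework;
    using Microsoft.Xna.Framework.Graphics;

    static class Occlusion
    {
        // Returns every obstacle whose bounding box lies on the camera-to-player line of sight.
        public static List<BoundingBox> FindOccluders(Viewport viewport, Matrix view,
            Matrix projection, Vector3 playerPosition, IEnumerable<BoundingBox> obstacles)
        {
            // Player's position on screen (Z = 0 is the near plane, Z = 1 the far plane).
            Vector3 screenPos = viewport.Project(playerPosition, projection, view, Matrix.Identity);

            // Unproject two depths at that pixel to get a ray through the scene.
            Vector3 nearPoint = viewport.Unproject(new Vector3(screenPos.X, screenPos.Y, 0f),
                                                   projection, view, Matrix.Identity);
            Vector3 farPoint = viewport.Unproject(new Vector3(screenPos.X, screenPos.Y, 1f),
                                                  projection, view, Matrix.Identity);
            var ray = new Ray(nearPoint, Vector3.Normalize(farPoint - nearPoint));

            float playerDistance = Vector3.Distance(nearPoint, playerPosition);
            var occluders = new List<BoundingBox>();
            foreach (BoundingBox box in obstacles)
            {
                float? hit = ray.Intersects(box);
                if (hit.HasValue && hit.Value < playerDistance)
                    occluders.Add(box);   // this object sits between the camera and the player
            }
            return occluders;
        }
    }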
One hack which might suffice if you have trouble with the picking approach is to render the character once as part of the scene, and then render it again at the end at half-alpha on top of everything. That way you can see the whole character and the wall, though you won't see through the wall as such.
One easy way, at least for prototyping, would be to always draw the player after you draw the rest of the scene. This would ensure that the player is rendered on top of anything else in the scene. Crude but effective.
Create a bounding volume from the camera to the extents of the player, determine what objects intersect that volume, and then render them in whatever alternate style you want?
There might be some ultra-clever way to do this, but this seems like the pretty straightforward version, and shouldn't be too much of a perf hit (you're probably doing collision every frame anyway....)
The simplest thing I can think of that should work relatively well is to model every obstacle as a plane perpendicular to your ground (assuming you have a ground), roughly treating every obstacle as a wall with some height.
Model your player as a point somewhere, and model your camera as another point. The line in 3d that connects these two points lies in a plane that is particularly interesting to you, because if this plane intersects an "obstacle plane" below the height of the obstacle, that means that that obstacle is blocking your view of the player point.
I hope that's somewhat clear. To make this into an algorithm you'd have to implement a general method for determining where two planes intersect (to determine whether the obstacle is tall enough to block the view).
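A small sketch of that test for a single wall, assuming each wall is described by a point on it, a horizontal normal and a height, with Y as the up axis; all of those conventions are assumptions. It only checks the crossing height, so a full version would also test the hit point against the wall's horizontal extent.

    using System;
    using Microsoft.Xna.Framework;

    static class WallOcclusion
    {
        // True if the camera-to-player segment crosses the wall's plane below its top edge.
        public static bool WallBlocksView(Vector3 camera, Vector3 player,
            Vector3 wallPoint, Vector3 wallNormal, float wallHeight)
        {
            Vector3 dir = player - camera;
            float denom = Vector3.Dot(wallNormal, dir);
            if (Math.Abs(denom) < 1e-6f)
                return false;                 // the segment runs parallel to the wall

            // Parameter t of the plane crossing along the camera->player segment.
            float t = Vector3.Dot(wallNormal, wallPoint - camera) / denom;
            if (t < 0f || t > 1f)
                return false;                 // the crossing is outside the segment

            Vector3 hit = camera + t * dir;
            return hit.Y >= 0f && hit.Y <= wallHeight;  // blocks only below the wall's top
        }
    }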
