I have an arbitrarily large IEnumerable container of WPF Geometry objects. The boundaries of the Geometry objects are non-trivial; they don't represent simple geometric shapes like rectangles or circles, they are complex polygons. The list will never change once it is initially populated.
I then have a point, and I want to determine which Geometry contains that point.
List<Geometry> list = getList();
var point = new Point(x, y);
var hit = list.FirstOrDefault(g => g.Bounds.Contains(point) && g.FillContains(point));
This code works, but it's generally slow. The initial Bounds check is a short circuit that ends up making it about 50% faster than without it. I think the next layer of complexity is to set up some sort of pre-rendered hit-map Dictionary.
Is there anything better that already exists in WPF to accomplish this task in a more performance oriented fashion?
I ended up creating a custom class that used the bounding box of each Geometry to perform a tiered lookup. The first tier used the simple bounding box calculation to narrow the list of Geometry objects that were necessary to search.
Each "bucket" was calculated using the average size of all Geometries in the collection. This has problems for the general case, but given that most of my Geometries are roughly the same size, this was a decent solution.
MSDN is best
How to: Hit Test Geometry in a Visual
How to: Hit Test Using Geometry as a Parameter
Related
I need a fast collection that maps a 2D int-typed point to a custom class in C#.
The collection needs to have:
Fast lookup (coords to custom class), adding a point if it does not exist
Fast removal of a range of key points (those outside a given rect). This actually rules out Dictionary<Point2D, ...>, as profiling found this operation taking 35% of the entire frame time in my sample implementation :-(
EDIT: To emphasize: I want to remove all fields OUTSIDE of a given rect (to kill the unused cache).
The coordinates can take any int values (they are used to cache [almost] infinite isometric 2D map tiles that are near the camera in Unity).
The points will always be organized in a rect-like structure (I can relax the requirement that they strictly follow a rect; I am actually using an isometric projection).
The structure itself is used for caching tile-specific data (like tile-transitions)
EDIT: Updated with outcome of discussion
You can use a sparse, static matrix for each "Chunk" in the cache and a cursor to represent the current viewport. You can then either use modulus math or a Quad tree to access each chunk, depending on the specific use case.
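As a rough sketch of the modulus-math variant (TileCache and its members are illustrative names, and it assumes the live area never spans more than a fixed number of tiles): each world coordinate maps to a slot by modulus, and writing a new tile simply overwrites whatever old tile occupied that slot, which replaces the expensive "remove everything outside the rect" pass.

class TileCache<T> where T : class
{
    private readonly int _size;
    private readonly T[,] _data;
    private readonly (int x, int y)[,] _keys;  // which world coordinate each slot currently holds
    private readonly bool[,] _used;

    public TileCache(int size)
    {
        _size = size;
        _data = new T[size, size];
        _keys = new (int, int)[size, size];
        _used = new bool[size, size];
    }

    // Map a world coordinate to a slot; the + _size keeps negative coordinates positive.
    private (int i, int j) Slot(int x, int y) =>
        (((x % _size) + _size) % _size, ((y % _size) + _size) % _size);

    public T Get(int x, int y)
    {
        var (i, j) = Slot(x, y);
        // A slot may still hold a tile from a previous viewport position, so check the key.
        return _used[i, j] && _keys[i, j] == (x, y) ? _data[i, j] : null;
    }

    public void Set(int x, int y, T value)
    {
        var (i, j) = Slot(x, y);
        _used[i, j] = true;
        _keys[i, j] = (x, y);   // overwriting implicitly evicts whatever tile scrolled out of range
        _data[i, j] = value;
    }
}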
Old Answer:
If they are uniformly spaced, then why do you need to hash at all? You could just use a matrix of objects, with null as the default value where nothing is cached.
Since you are using objects, the array is really just references under the hood, so its memory footprint wouldn't be affected much by the null values.
If you truly need it to be infinite, you can nest the matrices in a quad tree and create some kind of "Chunk" system.
I think this is what you need: RTree
I'm trying to construct a program in C# that generates a 3D model of a structure composed of beams, and then creates some views of the object (front, side, top and isometric).
As I don't need to draw surfaces (the edges are enough), I've been calculating each line to draw, and then drawing it with
GraphicObject.DrawLine(myPen, x1, y1, x2, y2)
This has worked fine so far, but as I keep adding parts to the structure, refreshing the GraphicObject takes too much time. So I'm looking into line visibility checks to reduce the number of lines to draw.
I've searched Wikipedia and some PDFs on the subject, but everything I found is oriented towards surfaces. So my question: is there a simplified algorithm to check the visibility of object edges, or should I go for a different approach, like considering surfaces?
Any suggestions would be appreciated, thanks for your help.
Additional notes/questions:
My current approach:
calculate every beam in its local axes (all vertices)
=> move them to their global position
=> create a list with pairs of points (projected and scaled to the view)
=> GraphicObject.DrawLine the list of point pairs (a rough sketch of this pipeline follows below)
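For clarity, the pipeline above looks roughly like this in code; Beam, LocalToWorld and projectToView are placeholders for whatever the real program uses (Graphics stands in for my GraphicObject), not the actual implementation:

using System;
using System.Collections.Generic;
using System.Drawing;
using System.Numerics;

// Beam is a made-up type holding pairs of local-space vertices, one pair per edge.
class Beam
{
    public Matrix4x4 LocalToWorld;   // moves the beam's vertices to their global position
    public List<(Vector3 A, Vector3 B)> Edges = new List<(Vector3, Vector3)>();
}

static class StructureRenderer
{
    public static void DrawStructure(Graphics g, Pen pen, IEnumerable<Beam> beams,
                                     Func<Vector3, PointF> projectToView)
    {
        foreach (var beam in beams)
            foreach (var (a, b) in beam.Edges)   // pairs of local-space vertices
            {
                var p1 = projectToView(Vector3.Transform(a, beam.LocalToWorld)); // local -> global -> view
                var p2 = projectToView(Vector3.Transform(b, beam.LocalToWorld));
                g.DrawLine(pen, p1, p2);
            }
    }
}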
Would the whole thing be faster if I calculated the view pixel by pixel rather than using the DrawLine method?
Screenshots follow showing the type of structure it's going to produce (not fully complete yet):
Structure view
Structure detail
There are 2 solutions to improve the performance.
a) Switch the computation to the graphics card.
b) Use a kd-tree or some other similar data structure to quickly remove the non-visible edges.
Here are more details:
For a), a lot of your computations involve multiplying many vertices (vectors of length 3) by some matrix. CPUs are slow at this because they only do a couple of these operations at a time. Switch to a GPU, for example using CUDA, which will allow you to do more of them in parallel, with better memory access infrastructure. You can also use OpenGL/DirectX/Vulkan or whatever to render the lines themselves, to skip having to get the results back from the graphics card and whatever other hiccups get introduced by Windows code/libraries. This will help improve performance in almost all cases.
For b), it only helps when you are not looking at the entire scene (in that case you really do need to draw everything). In such cases you can store your scene in a kd-tree or some other data structure and use it to quickly remove things that are definitely outside of the view area. You usually need to intersect some cuboid with a pyramid/frustum, so there's more math involved.
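To illustrate (b), the core of the culling is just a recursive bounding-box test. Node here is a stand-in for whatever kd-tree/BVH node type you end up building, and the bounds are assumed to already be projected into the same space as the viewport; this is a sketch, not a complete frustum test:

using System.Collections.Generic;
using System.Drawing;

// Hypothetical spatial-tree node; Bounds encloses everything below it, already in view space.
class Node
{
    public RectangleF Bounds;
    public List<Node> Children;   // null for a leaf
    public List<(PointF A, PointF B)> Edges = new List<(PointF, PointF)>();
}

static class Culling
{
    public static void CollectVisible(Node node, RectangleF view, List<(PointF A, PointF B)> output)
    {
        if (!node.Bounds.IntersectsWith(view))
            return;                            // whole subtree off-screen: one test skips it all
        if (node.Children == null)
            output.AddRange(node.Edges);       // leaf: candidate edges to draw
        else
            foreach (var child in node.Children)
                CollectVisible(child, view, output);
    }
}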
As a compromise that should help in large scenes where you want to see everything, you can consider adjusting the level of detail. In your example, the red beams across the structure are composed of 8 or so components. If you are far enough away you are not going to be able to distinguish the 8, so just draw one. This will work great if you have a large number of rounded edges, as you can simplify a lot of them.
I am creating large-scale worlds using 16*16*16 voxel chunks, stacked up to 32*32*32 chunks in dimensions, and I have hit a bit of a bump in the road, so to speak.
I want to create large structures that span 20+*20+*20+ chunks in volume, built from procedurally generated structures as well as from templates for some of the content. Now I have an issue: the visual render range is up to 32*32*32 chunks, while I have up to maybe 40*40*40 chunks held in memory at a time when possible.
The structures can be anything like towns, dungeons and roads. I was thinking of something like Perlin worms for roads: just lay them over the terrain in x,z and then analyze the path for bridges etc.
The structures and collections of structures need to be pre-generated before the player is within visual range, or to work more like Perlin noise does for heightmaps (the best solution), to avoid the players seeing the generator at work. They also need to be consistent with the world seed every time.
I have thought about this a bit and have 2 possible solutions.
1) Generate the structures based on a point of origin for the structure generator.
This causes several issues though, as even if I generate from the center of the structure, the structures can easily cross into the potential visual range of the player.
2) Pre-Generate "unreachable" chunks and then page them in and out in order to generate the structures using the above method.
This also seems rather unnecessary.
Both methods need to analyze the terrain in large quantities for a valid location to spawn the structures.
I was hoping somebody might have a more organic solution, or even just a simpler solution that doesn't require me to "look" so far ahead.
Thank you in advance.
EDIT:
I had an idea for dungeon generation in which I generate point clouds/nodes for rooms.
Steps:
1) When the generator finds a "node", it picks an x, y and z size to create a box, basing it on the origin point of the room** (centre or corner of the room) and the room type.
**x,y,z relative to 0,0,0 worldspace, calculated like so: new Vector3((chunkX*16)+voxelX, (chunkY*16)+voxelY, (chunkZ*16)+voxelZ)
2) Once a room size is calculated, check for overlaps, and if one is found, do one of several things (steps 1 and 2 are sketched in code after this list).
If the overlapping room is high up, lower it down until either the roof or the floor is flush. If the roof is flush, build stairs up to the room and remove the walls that intersect.
3) Look down, north and east for a room, maybe with a small cone, and attempt to create a hallway between them.
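Here is a small sketch of steps 1 and 2 using UnityEngine types; RoomType and the helper names are placeholders rather than real generator code:

using System.Collections.Generic;
using UnityEngine;

// RoomType is a placeholder for whatever drives room dimensions in the real generator.
class RoomType { public int MinSize, MaxSize, MinHeight, MaxHeight; }

static class RoomGen
{
    // Same formula as in the footnote: chunk coordinate * 16 + voxel coordinate.
    public static Vector3 ToWorld(int chunkX, int chunkY, int chunkZ, int voxelX, int voxelY, int voxelZ)
    {
        return new Vector3(chunkX * 16 + voxelX, chunkY * 16 + voxelY, chunkZ * 16 + voxelZ);
    }

    // Step 1: pick a size for this room type and build a box around the origin point.
    public static Bounds MakeRoom(Vector3 origin, System.Random rng, RoomType type)
    {
        var size = new Vector3(rng.Next(type.MinSize, type.MaxSize),
                               rng.Next(type.MinHeight, type.MaxHeight),
                               rng.Next(type.MinSize, type.MaxSize));
        return new Bounds(origin, size);
    }

    // Step 2: overlap test against rooms that have already been placed.
    public static bool Overlaps(Bounds room, IEnumerable<Bounds> existingRooms)
    {
        foreach (var other in existingRooms)
            if (room.Intersects(other))
                return true;
        return false;
    }
}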
This would probably work somewhat, especially if the center of the dungeon is the main hall/boss room.
This would be different for towns, cities, and surface dungeons. Still seems a little choppy though. Any ideas?
I faced a similar problem for a Minecraft mod I am writing. I want to have a number of overlapping "empires" which each create structures. But I don't want the structures to step on each other.
So, for this, I broke the world into arbitrarily sized tiles. (Compare to your 32x32x32 regions.) I also came up with a "radius of influence": how far from its center point a tile could create structures. Each tile had an instance of a provider class assigned to it with a unique seed.
Two methods on this class were provided for structure generation.
The first was a function that would return where it wanted to create structures, but only to the resolution of chunks. (Compare to your 16x16x16 block sets.) Each provider class instance had a priority, so if two providers tried to rez a structure in the same chunks, the higher-priority one would win.
The second function would be passed a world instance, and one of the data items returned by the first function and would be asked to actually create it.
Everything pieces together like this:
We get a request to resolve a certain chunk of the world. We work out the provider for the tile the chunk is in, and then all the providers for all the tiles that are within the maximum radius of that tile. We now have every provider that could influence this chunk. We call the first function on each of them, if they haven't been called already, and register what chunks each of them has claimed into a global map.
At this point, we've consulted everything that could have an influence on this chunk. We then ask that registry if someone has claimed this chunk. If so, we call back into that provider (method #2) with the chunk and the world instance and get it to draw the bits for this part of its structure.
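The mod itself isn't written in C#, but the shape of the two methods and the resolution step is roughly the following; IStructureProvider, ChunkPos and World are made-up minimal types, not the real API:

using System;
using System.Collections.Generic;

// Minimal stand-in types; the real world and chunk coordinates are richer than this.
struct ChunkPos { public int X, Z; }
class World { /* block placement API lives here */ }

interface IStructureProvider
{
    int Priority { get; }
    IEnumerable<ChunkPos> ClaimChunks();      // method #1: which chunks this provider wants
    void Build(World world, ChunkPos chunk);  // method #2: actually place this part of the structure
}

static class StructureResolver
{
    public static void Resolve(ChunkPos chunk, World world,
                               Func<ChunkPos, IEnumerable<IStructureProvider>> providersInRange,
                               Dictionary<ChunkPos, IStructureProvider> claims)
    {
        // Consult every provider whose radius of influence reaches this chunk
        // (the real version remembers which providers have already been asked).
        foreach (var provider in providersInRange(chunk))
            foreach (var claimed in provider.ClaimChunks())
                if (!claims.TryGetValue(claimed, out var existing) || provider.Priority > existing.Priority)
                    claims[claimed] = provider;   // higher priority wins a contested chunk

        // If someone claimed this chunk, call back into that provider to draw its part.
        if (claims.TryGetValue(chunk, out var owner))
            owner.Build(world, chunk);
    }
}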
Does that give you enough of an idea for a general approach to your problem?
I want to compress many non-overlapping rectangles into larger rectangles when they are adjacent.
Pseudo-code for my current algorithm:
do
    compress horizontally using sweep and prune
    compress the horizontal output vertically using sweep and prune
while (this output is smaller than the previous output)
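In C#, that loop is essentially the following skeleton; mergePass stands in for the sweep-and-prune merge itself, which isn't shown:

using System;
using System.Collections.Generic;
using System.Drawing;

static class RectCompressor
{
    // mergePass performs one sweep-and-prune merge; the bool selects horizontal or vertical.
    public static List<RectangleF> Compress(List<RectangleF> rects,
                                            Func<List<RectangleF>, bool, List<RectangleF>> mergePass)
    {
        int previousCount;
        do
        {
            previousCount = rects.Count;
            rects = mergePass(rects, true);    // merge horizontally adjacent rectangles
            rects = mergePass(rects, false);   // then merge that output vertically
        }
        while (rects.Count < previousCount);   // stop once a full pass merges nothing
        return rects;
    }
}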
Here's a link to sweep and prune.
This is working well, but I want to know if there are approaches that result in fewer output rectangles. I figure there's something more sophisticated than what I'm doing now.
So it sounds like your problem is that you have small gaps between the rectangles preventing them from being collected together into a single piece. If you have access to the source code for the sweep and prune method, you can add a buffer to the "overlap" test, but I think it would be more optimal to consider using an R-Tree. This will index the rectangular spaces without messing with limits on gaps etc.
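If you do go the buffered-overlap route, one way to express it is to grow one rectangle by the allowed gap before the intersection test; RectangleF here is just for illustration:

using System.Drawing;

static class RectOverlap
{
    // Treat rectangles as overlapping if they come within `gap` of each other.
    public static bool OverlapsWithGap(RectangleF a, RectangleF b, float gap)
    {
        var grown = RectangleF.Inflate(a, gap, gap);   // expands a by gap on every side
        return grown.IntersectsWith(b);
    }
}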
R-Tree Wiki
Here is a relevant paper by Sellis et al. describing the R+ tree:
http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=50ECCC47148D9121A4B39EC1220D9FB2?doi=10.1.1.45.3272&rep=rep1&type=pdf
Here is a C# implementation of an R-Tree:
http://sourceforge.net/projects/cspatialindexrt/
[Edit - After Comment 1]
So let me see if I can capture the current problem.
Rectangles are joined in passes of horizontal/vertical adjacency tests.
Rectangles are only joined if the adjacent boundary for both is equal.
The intermediate result of any join must also form a valid rectangle.
The result is non-optimal because of the sequence of joining.
I think you're actually looking for the minimum dissection into rectangles of a rectilinear polygon. The first step would be to join ALL the touching rectangles together, regardless of whether they form a rectangle or not. I think you are getting caught up in the intermediate stages of each step also needing to be complete rectangle decompositions, which leads to a sub-optimal result. If you merge them together into a single rectilinear polygon, you can use graph-theoretic mechanisms.
You can check out Graph-Theoretic Solutions to Computational Geometry Problems by David Eppstein.
Or investigate Algorithm for finding the fewest rectangles to cover a set of rectangles without overlapping by Gareth Rees
I have a grid of 3D terrain, where the coordinates (x,y,z) of each grid point are known. I also have a monotonically increasing/decreasing line whose start point is known. I want to find the point where the terrain and the line meet. What is the algorithm to do this?
What I can think of is to store the coordinates of the 3D terrain in an n x n matrix, and then segment the line based on the terrain grid. I would start with the grid cell nearest to the line, compute whether that cell's plane intersects the line, and if it does, get the coordinates and exit. If not, I would proceed to the next segment.
But is my algorithm the best, or most optimal, solution? Or are there any existing libraries that already do this?
A different approach would be to triangulate the terrain grid to produce a set of facets and then intersect the line with those.
Obviously you'd need to do some optimisations like only checking those facets that intersect the bounding box of the line. You can do a quite cheap/quick facet bounding box to line bounding box check which will discount most of the triangles in the terrain very quickly.
If you arrange your triangles into an octree (as @sum1stolemyname suggested, but for the points) then this checking can be done from the "top down" and you should be able to discount whole sections of the terrain with a single calculation.
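For the per-facet test itself, the standard Möller–Trumbore ray/triangle intersection is enough; this sketch uses System.Numerics and isn't tied to any particular terrain library:

using System;
using System.Numerics;

static class FacetIntersection
{
    // Standard Möller–Trumbore ray/triangle intersection.
    public static bool RayIntersectsTriangle(Vector3 origin, Vector3 dir,
                                             Vector3 v0, Vector3 v1, Vector3 v2,
                                             out Vector3 hit)
    {
        hit = default;
        var e1 = v1 - v0;
        var e2 = v2 - v0;
        var p = Vector3.Cross(dir, e2);
        float det = Vector3.Dot(e1, p);
        if (Math.Abs(det) < 1e-8f) return false;   // ray is parallel to the facet
        float inv = 1f / det;
        var t = origin - v0;
        float u = Vector3.Dot(t, p) * inv;
        if (u < 0f || u > 1f) return false;        // outside the triangle
        var q = Vector3.Cross(t, e1);
        float v = Vector3.Dot(dir, q) * inv;
        if (v < 0f || u + v > 1f) return false;
        float dist = Vector3.Dot(e2, q) * inv;
        if (dist < 0f) return false;               // intersection lies behind the ray origin
        hit = origin + dir * dist;
        return true;
    }
}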
Not directly an optimisation, just a few hints:
If your grid is large, it might be worthwhile to build an octree from your terrain in order to quickly reduce the number of grid nodes you have to check your line against. This can be more efficient in a huge grid (like 512*512 nodes), since only the leaf nodes your ray passes through have to be considered.
Additionally, the octree can be used to decide which parts of your grid are visible and therefore have to be drawn, by checking which leaf nodes are in the viewing frustum.
There is a catch, though: building the octree has to be done in advance, which takes some time, and the tree is static. It cannot easily be modified after it has been constructed, since a modification to one node might affect several other nodes, not necessarily adjacent ones.
However, if you do not plan to modify your grid once it is created, an octree will be helpful.
UPDATE
Now that I understand how you are planning to store your grid, I believe space partitioning will be an efficient way to find the nearest neighbour of the intersection line.
Finding the nearest neighbour linearly has a runtime complexity of O(N), while space-partitioning approaches have an average runtime complexity of O(log N).
If the terrain is not built from a nice analytic function, you will have to do a ray trace, i.e. traverse the line step by step in order to find an intersection. This procedure can take some time.
There are several parameters for the procedure, e.g. the offset you walk along the line in each step. If the offset is too large, you might skip over some "heights" of your terrain and thus not get the correct intersection. If the offset is too small, it will slow down your procedure.
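In code, the basic traversal looks something like this; heightAt is assumed to sample (and ideally interpolate) the terrain grid at an arbitrary x/z position, and a real implementation would refine the hit point further:

using System;
using System.Numerics;

static class TerrainRay
{
    // heightAt samples the terrain height at an arbitrary x/z position.
    public static bool MarchRay(Vector3 start, Vector3 direction, float step, float maxDistance,
                                Func<float, float, float> heightAt, out Vector3 hit)
    {
        var dir = Vector3.Normalize(direction);
        for (float d = 0f; d <= maxDistance; d += step)   // "step" is the offset discussed above
        {
            var p = start + dir * d;
            if (p.Y <= heightAt(p.X, p.Z))                // the ray has dropped below the terrain surface
            {
                hit = p;                                  // a real implementation would refine this point
                return true;
            }
        }
        hit = default;
        return false;
    }
}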
However, there is a nice trick to save time, described here and here. It uses a kind of optimization structure for the terrain, i.e. it builds several levels of detail in the following way: the finest level of detail is just the terrain itself. The next (coarser) level of detail contains just a fourth of the number of original "pixels" in the terrain texture and combines 4 pixels into one, taking the maximum. The next level of detail is constructed analogously:
   ..        ..     ..    .
  ....       ..     ..    .
 ......     ....    ..    .
........ => .... => .. => .
01234567    0246    04    0
            1357    26    4
fine     =>      =>    =>  coarse
When the ray cast is performed, the coarser levels of detail are checked first:
      /
     /
    /.
     .
     .
     .
If the ray already misses the coarse level of detail, no finer level has to be examined. That's just a very rough idea of how the optimisation works, but it works quite well. Implementing it is quite a bit of work, but the paper is a good help.