I'm writing a piece of simulation software, and need an efficient way to test for collisions along a line.
The simulation is of a train crossing several switches on a track. When a wheel comes within N inches of the switch, the switch turns on, then turns off when the wheel leaves. Since all wheels are the same size, and all switches are the same size, I can represent them as a single coordinate X along the track. Switch distances and wheel distances don't change in relation to each other, once set.
This is a fairly trivial problem when done through brute force by placing the X coordinates in lists and traversing them, but I need a way to do it efficiently, because it needs to be extremely accurate even when the train is moving at high speed. There are tons of tutorials on 2D collision detection, but I'm not sure of the best way to go about this unique 1D scenario.
Apparently there's some confusion about what my data looks like.
I'm simulating a single site, not an entire region. The trains can be of any length, with different types of cars, but there is only ever one train. My train data is in the form {48,96,508,556,626,674,...}, indicating the distances from the front of the train (0) to the center of the axle.
(Train data will more likely come to me in the form of an ordered list of Car objects, each of which has a length and a list of integers representing axle distances from the front of that car, but it all gets aggregated into a single list, since all axles are the same to me.)
My switches are all within several hundred feet and will often be entirely covered by the train. The switches can be at any interval, from hundreds of feet to several inches apart, and the list is in the same form as the train's: {0,8,512,520,...}, indicating the distances from the beginning of the site to the center of the switch.
Finally, I know the distance at which the wheel activates the switch, in inches.
For example, using the above sample data and an activation distance of 8 inches, the first switch at X=0 would activate when the train hits X=40, meaning the train is 40 inches into the site. When the train hits X=48, the switch at X=8 is also activated. At X=56, the first switch goes off, and at X=64, the second switch also goes off. Different axles are turning different switches on and off as the train crosses the site.
The train is usually running at speeds under 10 mph, but can go much higher. (Right now our simulation is capped at 30 mph, but higher would be great.)
Have a sorted list of all the switches' coordinates and use binary search on the list to find the nearest switch. Then check to see how far it is and whether or not it's a collision.
O(log n)
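For illustration, a minimal C# sketch of that check (names are mine; it assumes the switch coordinates live in a sorted double[]):

static bool IsNearSwitch(double[] switches, double wheelX, double activationDist)
{
    // Array.BinarySearch returns the match index, or the bitwise complement
    // of the index of the next larger element when there is no exact match.
    int i = Array.BinarySearch(switches, wheelX);
    if (i >= 0)
        return true;  // wheel is exactly on a switch
    i = ~i;           // index of the first switch ahead of the wheel
    bool nearAhead  = i < switches.Length && switches[i] - wheelX <= activationDist;
    bool nearBehind = i > 0 && wheelX - switches[i - 1] <= activationDist;
    return nearAhead || nearBehind;
}

You'd run this once per wheel per update.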
Another option is to exploit the fact that the train moves along the track and can only ever come close to two switches, one behind and one ahead.
Construct a doubly-linked list of all the switches and position an extra node to represent the train in the correct location in the linked list. Then only check proximity to the switch the train is headed towards.
O(1)
To save memory, store the sorted coordinates in an array and simply keep track of which indexes the train is between.
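A sketch of that array variant (illustrative names): one O(log n) binary search places the wheel, after which each update is O(1) index nudging.

class WheelTracker
{
    readonly double[] switches;  // sorted switch coordinates
    int next;                    // index of the first switch at or ahead of the wheel

    public WheelTracker(double[] sortedSwitches, double startX)
    {
        switches = sortedSwitches;
        next = Array.BinarySearch(switches, startX);
        if (next < 0) next = ~next;  // one-time placement
    }

    public void Update(double wheelX, double activationDist)
    {
        // Nudge the index as the wheel moves (works in both directions).
        while (next < switches.Length && switches[next] < wheelX) next++;
        while (next > 0 && switches[next - 1] >= wheelX) next--;

        bool nearAhead  = next < switches.Length && switches[next] - wheelX <= activationDist;
        bool nearBehind = next > 0 && wheelX - switches[next - 1] <= activationDist;
        // ... toggle the neighbouring switches based on nearAhead/nearBehind ...
    }
}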
Pre-process your switch locations and sensitivity range into a list of track segments. Each segment has a length, and between each segment a set of switch 'on' or 'off' events.
switch_on ( 0 ), // x = -8 here
segment ( length: 8 ), switch_on ( 1 ), // x = 0 here
segment ( length: 8 ), switch_off ( 0 ),
segment ( length: 8 ), switch_off ( 1 ),
segment ( length: 488 ), switch_on ( 2 ),
segment ( length: 8 ), switch_on ( 3 ),
segment ( length: 8 ), switch_off ( 2 ),
segment ( length: 8 ), switch_off ( 3 ),
...
For each axle, have its current location also represented along with the track segment it is on.
If you're doing an event based simulation, the next event should be scheduled for the min value of the distance from an axle to the end of its current track segment. This is independent of the train speed, and accurate (you won't miss switches if the train goes faster). Store the events in a heap if necessary (it's often not worth it for less than 30 or so, profile the event scheduling if necessary).
Processing an event will be O(number of axles). Most steps will involve one or two switch state changes and a position update. At each event, one axle will cause one switch to go on or off (switches which would be simultaneous according to the data cause two events, zero time apart), and all axle times to the end of their segments need to be compared. You can either assume that all axles travel at the same speed or not; it doesn't matter as far as processing the events goes, it only makes the calculation of the time to reach the next switch specific to the axle in question.
If you're on a fixed time step simulation, then process all events which would have occurred up to the time at the end of the step, then one event to move the axles to the point they reach at the end of the step.
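As a rough C# sketch of the pre-processing only (my names, not a full event loop): each switch contributes an "on" boundary at position - activate and an "off" boundary at position + activate, and sorting the boundaries yields the segment list above.

struct TrackEvent { public double Pos; public int Switch; public bool On; }

static List<TrackEvent> BuildEventList(double[] switches, double activate)
{
    var events = new List<TrackEvent>();
    for (int i = 0; i < switches.Length; i++)
    {
        events.Add(new TrackEvent { Pos = switches[i] - activate, Switch = i, On = true });
        events.Add(new TrackEvent { Pos = switches[i] + activate, Switch = i, On = false });
    }
    events.Sort((a, b) => a.Pos.CompareTo(b.Pos));
    return events;  // segment lengths are the gaps between consecutive Pos values
}

Each axle then keeps an index into this list; the next event to schedule is the minimum over axles of (distance to that axle's next boundary) / speed.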
Store the switch list as a doubly-linked list as indicated by Ben.
Keep a pointer in the wheel object (or structure, assuming there is one) to the next switch and the previous switch relative to your current position. Initialize these as the wheel is placed on the track.
As you move over each switch, swap out the "next" and "previous" switches in your wheel object for the new "next" and "previous" that can be quickly obtained by examining the doubly-linked list.
This avoids all searches, except possibly initial placement of the wheel.
Additionally, the "switch" structure could be used to hold a proximity pointer back to all of the wheels that list it as "previous" or "next". (There's a mutex here, so be careful of who updates this.) This can provide a quick update of who's approaching any given switch and their distance from it.
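A minimal sketch of the wheel bookkeeping (illustrative names, switches kept in a LinkedList<double>):

class Wheel
{
    public double X;
    public LinkedListNode<double> Next;  // switch ahead; Next.Previous is the one behind

    public void Advance(double newX)
    {
        X = newX;
        // Passed a switch? Slide the window forward; no searching needed.
        while (Next != null && Next.Value < X)
            Next = Next.Next;
        // (A wheel moving backwards would walk Next.Previous instead.)
    }
}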
Assuming that axle-to-axle distances are always larger than twice the activation distance (so no two axles' toggle ranges for the same switch overlap), and that routes don't change frequently after the train enters, you should be able to speed things up with pre-calculation. Basically, for each switch, calculate a list of train travel distances at which it will toggle, then walk through the lists as the train advances.
Pseudocode:
axles    = {48, 96, 508, 556, 626, 674, ...}
switches = {0, 8, 512, 520, ...}
activate = 8

float   toggledist[num_switches][2*num_axles]  // sorted per switch, since axles are sorted
boolean switchState[num_switches] = {false, false, false, ...}
int     idx[num_switches] = {0, 0, 0, ...}

for (i in 0 .. num_switches-1)
    n = 0
    for (a in 0 .. num_axles-1)
        toggledist[i][n++] = switches[i] + axles[a] - activate
        toggledist[i][n++] = switches[i] + axles[a] + activate

travel = 0.0
each (cycle)
    travel += TrainVelocity * time
    for (i in 0 .. num_switches-1)
        while (idx[i] < 2*num_axles && travel >= toggledist[i][idx[i]])
            switchState[i] = !switchState[i]
            // additional processing for switch change here, if needed
            idx[i]++
I am trying to build a top down view spaceship game which has destructible parts. I need to simulate the process of depressurization in case of hull breach.
I have a tiled map which has the room partitioning code setup:
What I am trying to do is build some kind of vector field which would determine the ways the air leaves a depressurized room. So if you were to break the tile connecting the vacuum and the room (adjacent to both the purple and green rooms), you'd end up with a vector map like this:
My idea is to implement some kind of scalar field (similar to a potential field) to help determine the airflow: fill the grid with Euclidean distances (taking obstacles into account) to a known zero-potential point, then calculate the vectors by taking into account all of the adjacent tiles with a lower potential value than the current tile has:
However, this method has a flaw in that the amount of force applied to a body at a certain point doesn't take airflow bottlenecks and distance into account, so the force would be the same in the tile next to the vacuum tile as at the opposite end of the room.
Is there a better way to simulate such behavior, or maybe a change to the algorithm I thought of that would more or less realistically take distance and bottlenecks into account?
Algorithm upgrade ideas collected from comments:
(...) you want a realistic feeling of the "force" in this context, so it should be based not just on the distance but rather, like you said, on the airflow. You'd need to estimate it to some degree, and note that it behaves similarly to Kirchhoff's current law in electronics. Let's say the hole is small - then the amount of air sucked out per second is small. The first nearest tile(s) must cover it; they lose X air per second. Their surrounding tiles also must cover it - they lose X air per second in total. And their neighbours... and so on. It works like Dijkstra distance, but counting down.
Example: Assuming no walls, start with 16/sec at point zero directing to the hole in the ground; the surrounding 8 tiles will get 2/sec directed to the point-zero tile, the next layer of 12 surrounding tiles will get something like 1.33/sec, and so on. Now alter that to, e.g., (1) account for various initial hole sizes, (2) various large no-pass-through obstacles, (3) limitations in air flow due to small passages - which behave like new start points.
Another example (from the map in question): the tile that has a value of zero would have a flux of, say, 1000 units/s. The ones below it would be 500/s each, the next one would be 1000/s as well, and the three connected to it would have 333/s each.
After that, we could base the coefficient for the vector on the difference of this scalar value and since it takes obstacles and distance into account, it would work more or less realistically.
Regarding point (3) above, imagine that instead of having only sure-100%-pass and nope-0%-wall tiles you also have intermediate options. Instead of just a corridor and a wall you can also have, e.g., a broken window with 30% air pass. For example, at the place on the map with distance [0] you've got the initial hole that generates a flux of 1000/sec. However, at distance [2] there is a small air vent or a broken window with a 30% airflow modifier. It means that it will limit the incoming amount (2x500=1000) to 0.3x(2x500)=300/sec that will now flow further to the next areas. That will allow you to depressurize compartments at different speeds, so the first few tiles will lose all air quickly and the rest of the deck will take some more time (unless the 30%-modifier window at point [2] breaks completely, etc.).
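A rough C# sketch of the "Dijkstra counting down" idea (my naming; it ignores the partial-pass modifiers from point (3) and simply splits the hole's flux equally across each BFS layer):

using System.Collections.Generic;

static double[,] ComputeFlux(bool[,] passable, (int x, int y) hole, double holeFluxPerSec)
{
    int w = passable.GetLength(0), h = passable.GetLength(1);
    var dist = new int[w, h];
    for (int x = 0; x < w; x++) for (int y = 0; y < h; y++) dist[x, y] = int.MaxValue;

    // BFS from the hole gives obstacle-aware distances.
    var queue = new Queue<(int x, int y)>();
    var layers = new List<List<(int x, int y)>>();
    var dirs = new (int dx, int dy)[] { (1, 0), (-1, 0), (0, 1), (0, -1) };
    dist[hole.x, hole.y] = 0;
    queue.Enqueue(hole);
    while (queue.Count > 0)
    {
        var (x, y) = queue.Dequeue();
        int d = dist[x, y];
        while (layers.Count <= d) layers.Add(new List<(int x, int y)>());
        layers[d].Add((x, y));
        foreach (var (dx, dy) in dirs)
        {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            if (!passable[nx, ny] || dist[nx, ny] != int.MaxValue) continue;
            dist[nx, ny] = d + 1;
            queue.Enqueue((nx, ny));
        }
    }

    // Every layer must carry the hole's whole flux, shared among its tiles,
    // matching the 16 -> 2/tile -> 1.33/tile example above.
    var flux = new double[w, h];
    for (int d = 1; d < layers.Count; d++)
        foreach (var (x, y) in layers[d])
            flux[x, y] = holeFluxPerSec / layers[d].Count;
    return flux;  // point each tile's vector at its lowest-dist neighbour
}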
I need to find optimal path between a number of points on a 2d map.
The 2D map is of a building and will simply have where you cannot go (through walls) and all the points on the map. So it's not really a map; rather, lines you cannot go through, with points to pass through.
I have a number of points, say between 20 and 500.
I start with one that I select and then need the route calculated for most optimal path.
I would love hints on where to look for this travelling salesman problem with obstacles. Or even better, a ready-made library for doing it.
Bonuses
Things like doors can be weighted as they are less fun to pass through back and forth.
Possibility of prioritizing/weighting the ability to end close to where you started.
Selecting areas as passable but annoying (weighting down)
.NET/C# code that I can use; I want to use this in both a .NET MVC project and a Xamarin mobile project, so .NET code would be great (if code exists).
Update example
In my example here we have an office. Now I have not thought every detail out so this is merely an example.
All the purple dots need to be checked
Yellow area could mean annoying to pass through but doable
Red could mean not active but can be passed if no other option exists.
Blue (walls) are impenetrable and can not be passed.
Green is doors, possibly weighted down as it's annoying to go through closed doors (usually this would probably make sense anyway, as the dots in a room would be easiest to check together).
The user would go to one dot, check it, then the software should tell him which one to do next until he is done.
Bonus could be given for ending close to start place. So for instance in this example, if the red area was normal and contained dots it would have been easy to make it a loop. (So the user comes back close to where he started)
Finally I suppose it would also be smart to differentiate outdoors areas as you would need to get dressed for outdoors, so you only want to go out once.
Also it could be smart to be able to prioritize ending on a point close to stairwell to next floor if they intend to check multiple floors at once.
Of course, real plans would be more complex and larger than this example.
Again, sorry for just brainstorming out ideas, but I have never done this kind of work and am happy for any pointers :-)
Let N be the set of nodes to visit (purple points). For each i and j in N, let c(i,j) be the distance (or travel time) to get from i to j. These can be pre-computed based on actual distances plus walls, doors, other barriers, etc.
Now, you could then add a penalty to c(i,j) if the path from i to j goes through a door, "annoying" area, etc. But a more flexible way might be as follows:
Let k = 1,...,K be the various types of undesirable route attributes (doors, annoying areas, etc.). Let a_k(i,j) be the amount of each of these attributes on the path from i to j. (For example, suppose k=1 represents doors, k=2 represents yellow areas, and k=3 represents outside. Then a path from an i in the break area to a j in the bathroom might have a_1(i,j) = 1, and a path from an i to a j both in the yellow areas would have a_2(i,j) = 0.5 or 2.0 or however annoying that area is, etc.)
Then, let p_k be a penalty for each unit of undesirable attribute k -- maybe p_1 = 0.1 if you don't mind going through doors too much but p_2 = 3.0 if you really don't like yellow areas.
Then, let c'(i,j) = c(i,j) + sum{k=1,...,K} p_k * a_k(i,j). In other words, replace the actual distance with the distance plus penalties for all the annoyances. The user can set the p_k values before the optimization in order to express his/her preferences among these. The final penalties p_k * a_k(i,j) should be commensurate with the distance units used for c(i,j), though -- you don't want distances of 100m but penalties of 1,000,000.
Now solve a TSP with distances given by c'(i,j).
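As a hedged sketch (all names mine): build c'(i,j) from the penalty sum above, then run any TSP heuristic over it. Nearest-neighbour is shown here because it's trivial; for 20-500 points you'd normally improve the tour with 2-opt or hand c' to a proper solver.

// a[k] is the matrix of attribute-k amounts; p[k] is its per-unit penalty.
static double[,] PenalizedCosts(double[,] c, double[][,] a, double[] p)
{
    int n = c.GetLength(0);
    var cp = new double[n, n];
    for (int i = 0; i < n; i++)
        for (int j = 0; j < n; j++)
        {
            cp[i, j] = c[i, j];
            for (int k = 0; k < p.Length; k++)
                cp[i, j] += p[k] * a[k][i, j];
        }
    return cp;
}

static List<int> NearestNeighbourTour(double[,] cp, int start)
{
    int n = cp.GetLength(0);
    var tour = new List<int> { start };
    var visited = new bool[n];
    visited[start] = true;
    for (int step = 1; step < n; step++)
    {
        int last = tour[tour.Count - 1], best = -1;
        for (int j = 0; j < n; j++)
            if (!visited[j] && (best == -1 || cp[last, j] < cp[last, best]))
                best = j;
        visited[best] = true;
        tour.Add(best);
    }
    return tour;  // close the loop by returning to 'start' at the end
}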
The TSP requires you to start and end at the same node, so that preference is really a constraint. If you're going to solve for multiple floors simultaneously, then the stairway times would be in the c(i,j) so there's no need to explicitly encourage routes that end near a stairway -- the solution would tend to do that anyway since stairs are slow. If you're going to solve each floor independently, then just set the start node for each floor equal to the stairway.
I wouldn't do anything about the red (allowable but unused) areas -- that would already be baked into the c(i,j) calculations.
Hope this helps.
I'm writing an application that has a need to know the speed you're traveling. My application talks to several pieces of equipment, all with different built-in GPS receivers. Where the hardware I'm working with reports speed, I use that parameter. But in some cases, I have hardware which does NOT report speed, simply latitude and longitude.
What I have been doing in that case is marking the time that I receive the first coordinate, then waiting for another coordinate to come in. I then calculate the distance traveled and divide by the elapsed time.
The problem I'm running into is that some of the hardware reports position quickly (5-10 times per second) while some reports position slowly (0.5 times per second). When I'm receiving the GPS position quickly, my algorithm fails to accurately calculate the speed due to the inherent inaccuracies of GPS receivers. In other words, the position will naturally move due to GPS inaccuracy, and since the elapsed time span from the last received position is so small, my algorithm thinks we've moved far over a short time - meaning we are going fast (when in reality we may be standing still).
How can I go about averaging the speed to avoid this problem? It seems like the process will have to be adaptive based on how fast the points come in. For example if I simply average the last 5 points collected to do my speed calculation, it will probably work great for "fast" reporting units but it will hurt my accuracy for "slow" reporting units.
Any ideas?
Use a simple filter:
Take a position only if it is more than 10 meters away from last taken position.
Then calculate the distance between lastGood and thisGood, and divide by timeDiff.
You further want to ignore all speeds under 5 km/h, where GPS is most noisy.
You can further optimize by calculating the direction between the last point and this one; if it stays stable, you take the reading. This helps filtering.
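A minimal C# sketch of that filter (my names; distance is the standard haversine great-circle distance in meters):

class GpsSpeedFilter
{
    (double lat, double lon, DateTime t)? lastGood;

    // Returns a speed in km/h, or null if the fix was rejected as jitter.
    public double? OnFix(double lat, double lon, DateTime t)
    {
        if (lastGood == null) { lastGood = (lat, lon, t); return null; }
        var g = lastGood.Value;
        double d = Haversine(g.lat, g.lon, lat, lon);
        if (d < 10.0) return null;            // ignore moves under 10 m
        double kmh = d / (t - g.t).TotalSeconds * 3.6;
        lastGood = (lat, lon, t);
        return kmh < 5.0 ? 0.0 : kmh;         // treat < 5 km/h as standing still
    }

    static double Haversine(double lat1, double lon1, double lat2, double lon2)
    {
        double R = 6371000.0, toRad = Math.PI / 180.0;
        double dLat = (lat2 - lat1) * toRad, dLon = (lon2 - lon1) * toRad;
        double a = Math.Sin(dLat / 2) * Math.Sin(dLat / 2) +
                   Math.Cos(lat1 * toRad) * Math.Cos(lat2 * toRad) *
                   Math.Sin(dLon / 2) * Math.Sin(dLon / 2);
        return 2.0 * R * Math.Asin(Math.Sqrt(a));
    }
}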
I would average the speed over the last X seconds. Let's pick X=3. For your fast reporters, that means averaging your speed with about 20 data points. For your slow reporters, that may only get you a couple of data points. This should keep the accuracy fairly even across the board.
I'd try using the average POSITION over the last X seconds.
This should "average out" the random noise associated with the high frequency location input....which should yield a better speed computation.
(Obviously you'd use "averaged" positions to compute your speed)
You probably have an existing data point structure to pull a LINQ query from?
In light of the note that we need to account for negative vectors, and the suggestion to account for known margins of error, here is a more complex example:
// Assumes a GPSData type exposing timestamp, VectorErrorMargin(GPSData),
// XVector(GPSData) and YVector(GPSData); those helpers are implied by the
// discussion rather than defined here.
using System;
using System.Collections.Generic;
using System.Linq;

class GPS
{
    List<GPSData> recentData = new List<GPSData>();
    TimeSpan speedCalcZone = new TimeSpan(100000); // note: 100,000 ticks = 10 ms; widen to taste
    decimal acceptableError = .5m;

    double CalcAverageSpeed(GPSData newestPoint)
    {
        // Pair the newest fix with every recent, low-error fix in the window.
        var vectors = (from point in recentData
                       where point.timestamp > DateTime.Now - speedCalcZone
                       where newestPoint.VectorErrorMargin(point) < acceptableError
                       select new
                       {
                           xVector = newestPoint.XVector(point),
                           yVector = newestPoint.YVector(point)
                       });

        // Average the components, then take the magnitude of the mean vector,
        // so opposing jitter cancels instead of inflating the speed.
        var averageXVector = (from vector in vectors
                              select vector.xVector).Average();
        var averageYVector = (from vector in vectors
                              select vector.yVector).Average();

        var averagedSpeed = Math.Sqrt(Math.Pow(averageXVector, 2) + Math.Pow(averageYVector, 2));
        return averagedSpeed;
    }
}
But as pointed out in comments, there is no one magic algorithm, you have to tweak it for your circumstances and needs.
You're looking for one ideal algorithm that may not exist, for one very simple reason: you can't invent data where there isn't any, and sometimes you can't even tell where the data ends and the error begins.
That being said, there are ways to reduce the "noise", as you've discovered by averaging 5 consecutive measurements; I'd add that you can throw away the "outliers" and choose the 3 of the 5 that are closest to each other.
The question here is what would work best (or acceptably well) for your situation. If you're tracking trucks moving around the continent a few mph won't matter as the errors will cancel themselves out, but if you're tracking a flying drone that moves between buildings the difference can be quite significant.
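One possible reading of the "3 of the 5 closest to each other" trick, as an illustrative C# snippet (needs System.Linq): sort the last five samples and keep the tightest run of three.

static double RobustSpeed(IList<double> lastFive)
{
    var s = lastFive.OrderBy(v => v).ToList();
    int bestStart = 0;
    for (int i = 1; i + 2 < s.Count; i++)
        if (s[i + 2] - s[i] < s[bestStart + 2] - s[bestStart])
            bestStart = i;  // tightest window of 3 consecutive sorted values
    return (s[bestStart] + s[bestStart + 1] + s[bestStart + 2]) / 3.0;
}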
Here are some more ideas, you can pick and choose how far you can go, I'm assuming the truck scenario and the idea is to get most probable speed when you don't have an accurate reading:
- discard "improbable" speeds - tall buildings can reflect GPS signals, causing speeds of over 100 mph when you're just walking; having a "highway map" (see below) can help manage the cut-off value
- transmit, store, and calculate with error ranges rather than point values (some GPS receivers report an error range).
- keep average error per location
- keep average error per reporting device
- keep average speed per location, you'll end up having a map of highways vs other roads
- you can correlate location speed and direction
I am making a turn based hex-grid game. The player selects units and moves them across the hex grid. Each tile in the grid is of a particular terrain type (eg desert, hills, mountains, etc) and each unit type has different abilities when it comes to moving over the terrain (e.g. some can move over mountains easily, some with difficulty and some not at all).
Each unit has a movement value, and each tile takes a certain amount of movement based on its terrain type and the unit type. E.g. it costs a tank 1 to move over desert, 4 over swamp, and it can't move at all over mountains. Whereas a flying unit moves over everything at a cost of 1.
The issue I have is that when a unit is selected, I want to highlight an area around it showing where it can move. This means working out all the possible paths through the surrounding hexes, how much movement each path will take, and lighting up the tiles based on that information.
I got this working with a recursive function but found it took too long to calculate. I moved the function into a thread so that it didn't block the game, but it still takes around 2 seconds for the thread to calculate the movable area for a unit with a move of 8.
It's over a million recursions, which obviously is problematic.
I'm wondering if anyone has any clever ideas on how I can optimize this problem.
Here's the recursive function I'm currently using (it's C#, btw):
private void CalcMoveGridRecursive(int nCenterIndex, int nMoveRemaining)
{
//List of the 6 tiles adjacent to the center tile
int[] anAdjacentTiles = m_ThreadData.m_aHexData[nCenterIndex].m_anAdjacentTiles;
foreach(int tileIndex in anAdjacentTiles)
{
//make sure this adjacent tile exists
if(tileIndex == -1)
continue;
//How much would it cost the unit to move onto this adjacent tile
int nMoveCost = m_ThreadData.m_anTerrainMoveCost[(int)m_ThreadData.m_aHexData[tileIndex].m_eTileType];
if(nMoveCost != -1 && nMoveCost <= nMoveRemaining)
{
//Make sure the adjacent tile isnt already in our list.
if(!m_ThreadData.m_lPassableTiles.Contains(tileIndex))
m_ThreadData.m_lPassableTiles.Add(tileIndex);
//Now check the 6 tiles surrounding the adjacent tile we just checked (it becomes the new center).
CalcMoveGridRecursive(tileIndex, nMoveRemaining - nMoveCost);
}
}
}
At the end of the recursion, m_lPassableTiles contains a list of the indexes of all the tiles that the unit can possibly reach and they are made to glow.
This all works, it just takes too long. Does anyone know a better approach to this?
As you know, with recursive functions you want to make the problem as simple as possible. This still looks like it's trying to bite off too much at once. A couple thoughts:
Try using a HashSet structure to store m_lPassableTiles? That Contains condition is a linear scan on a List, which is generally an expensive operation; a HashSet makes it O(1).
I haven't tested the logic of this in my head too thoroughly, but could you set a base case before the foreach loop? Namely, that nMoveRemaining == 0?
Without knowing how your program is designed internally, I would expect m_anAdjacentTiles to contain only existing tiles anyway, so you could eliminate that check (tileIndex == -1). Not a huge performance boost, but makes your code simpler.
By the way, I think games which do this, like Civilization V, only calculate movement costs as the user suggests intention to move the unit to a certain spot. In other words, you choose a tile, and it shows how many moves it will take. This is a much more efficient operation.
Of course, when you move a unit, surrounding land is revealed -- but I think it only reveals land as far as the unit can move in one "turn," then more is revealed as it moves. If you choose to move several turns into unknown territory, you better watch it carefully or take it one turn at a time. :)
(Later...)
... wait, a million recursions? Yeah, I suppose that's the right math: 6^8 (8 being the movements available) -- but is your grid really that large? 1000x1000? How many tiles away can that unit actually traverse? Maybe 4 or 5 on average in any given direction, assuming different terrain types?
Correct me if I'm wrong (as I don't know your underlying design), but I think there's some overlap going on... major overlap. It's checking adjacent tiles of adjacent tiles already checked. I think the only thing saving you from infinite recursion is checking the moves remaining.
When a tile is added to m_lPassableTiles, remove it from any list of adjacent tiles received into your function. You're kind of doing something similar in your line with Contains... what if you annexed that if statement to include your recursive call? That should cut your recursive calls down from a million+ to... thousands at most, I imagine.
Thanks for the input everyone. I solved this by replacing the Recursive function with Dijkstra's Algorithm and it works perfectly.
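For reference, a Dijkstra-style flood fill over the question's data structures might look like the sketch below (not the poster's actual code; it reuses the field names from the snippet above and needs System.Linq). Each tile is expanded at most once, so the work is proportional to the number of reachable tiles rather than 6^moves.

private List<int> CalcMoveGridDijkstra(int nStartIndex, int nMovePoints)
{
    var best = new Dictionary<int, int>();           // tile index -> cheapest cost found
    var open = new SortedSet<(int cost, int tile)>();
    best[nStartIndex] = 0;
    open.Add((0, nStartIndex));
    while (open.Count > 0)
    {
        var (cost, tile) = open.Min;
        open.Remove(open.Min);
        foreach (int adj in m_ThreadData.m_aHexData[tile].m_anAdjacentTiles)
        {
            if (adj == -1) continue;
            int nMoveCost = m_ThreadData.m_anTerrainMoveCost[(int)m_ThreadData.m_aHexData[adj].m_eTileType];
            if (nMoveCost == -1) continue;           // impassable
            int newCost = cost + nMoveCost;
            if (newCost > nMovePoints) continue;     // out of movement budget
            bool seen = best.TryGetValue(adj, out int oldCost);
            if (seen && oldCost <= newCost) continue;
            if (seen) open.Remove((oldCost, adj));   // found a cheaper route
            best[adj] = newCost;
            open.Add((newCost, adj));
        }
    }
    return best.Keys.Where(t => t != nStartIndex).ToList();  // tiles to glow
}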
As my personal project, I'm developing a game that users can join at any time.
I have a tiled world map that is created from a simple Bitmap and has resources at random positions all over the map, except for oceans.
When a player joins, I want to create his starting position at a place that has at least 1 tile of each of the 4 resources in range (a circle with a still-to-decide diameter; I'm thinking about 3-4 tiles), but no ocean tiles (Tile.Type != "ocean") and nothing conflicting with a field belonging to another player (Tile.Owner == null).
The map size can vary; currently it's 600x450, and it's implemented as a simple array: Tile[][], with Tile.Resource being either null or having Tile.Resource.Type as a string of the resource name (it's configurable by plaintext files to fit any scenery I want to put it in, so no built-in enums possible).
I currently have a loop that simply goes through every possible position, checks every field in range, counts the number of each resource field, and discards the position if any resource is missing, if a field belongs to a player, or if there's an ocean field.
I would prefer it to find a random position, but that's not a requirement; mono-compatibility, however, is a requirement.
What would be the best way to implement an algorithm for that in C#?
Edit
The area of players can and will increase/change, and resources can be used up and may even appear randomly (=> "Your prospectors found a new goldmine"), so pre-calculated positions will probably not work.
Instead of looping through all your positions, why don't you loop through all your resources? Your resources are likely to be more scant. Then pick one of the sets of resources that meet your clustering criterion.
You might consider simulated annealing... it's not very complex to implement. You have a set of criteria with certain weights, and you randomly "shake" the position at a certain "temperature" (the higher the temp, the greater the radius the position may randomly move within, from its previous position). Then, as it "cools", you measure the value of the position based on the total weights and subtract negative things, like spawning too close to where they died, or next to other players, etc. If the value is not within an acceptable range, you decrease the temperature and "shake" the position again, cool down, check the weights and overall value, and repeat until you get an acceptable solution.
Simulated annealing is used in map making, to label cities and features with maximum clarity while staying within range and minimizing overlap. Since it's a heuristic approach, there is no guarantee of an optimal solution, so you keep "lowering the temp" and eventually just choose the best result.
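A tiny illustrative annealing loop along those lines (the cooling constants and the Score delegate are placeholders, not a tuned implementation):

(int x, int y) Anneal(Func<(int x, int y), double> score, (int x, int y) start, Random rng)
{
    var cur = start;  double curScore = score(cur);
    var best = cur;   double bestScore = curScore;
    for (double temp = 50.0; temp > 0.5; temp *= 0.95)   // cooling schedule
    {
        // "Shake": random step with radius proportional to the temperature.
        var cand = (x: cur.x + (int)((rng.NextDouble() - 0.5) * temp),
                    y: cur.y + (int)((rng.NextDouble() - 0.5) * temp));
        double s = score(cand);
        // Always accept improvements; accept worse moves with a probability
        // that shrinks as the temperature drops (the classic annealing rule).
        if (s >= curScore || rng.NextDouble() < Math.Exp((s - curScore) / temp))
        { cur = cand; curScore = s; }
        if (curScore > bestScore) { best = cur; bestScore = curScore; }
    }
    return best;
}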
Let's suppose that once your map is created you don't have to create a new one often.
Just add the following to each Tile and calculate them once after your map was generated:
- int NrOceanTiles
- int NrResourceA
- int ...
Now when you want to get a tile you can do it quite a bit faster:
IEnumerable<Tile> goodTiles = tiles.Where(tile => tile.NrResourceA >= 1 && tile.NrResourceB >= 2);
Tile goodTile = goodTiles.ElementAt(randomI);
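A sketch of that pre-calculation pass (my names; it approximates the "circle" with a square window and assumes each Tile carries a Dictionary<string, int> of per-resource-type counts, since the types are plain strings):

void PrecomputeCounts(Tile[][] map, int radius)
{
    for (int x = 0; x < map.Length; x++)
        for (int y = 0; y < map[x].Length; y++)
        {
            var tile = map[x][y];
            tile.NrOceanTiles = 0;
            tile.ResourceCounts.Clear();
            for (int dx = -radius; dx <= radius; dx++)
                for (int dy = -radius; dy <= radius; dy++)
                {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= map.Length || ny < 0 || ny >= map[nx].Length) continue;
                    var n = map[nx][ny];
                    if (n.Type == "ocean") tile.NrOceanTiles++;
                    if (n.Resource != null)
                        tile.ResourceCounts[n.Resource.Type] =
                            tile.ResourceCounts.TryGetValue(n.Resource.Type, out int c) ? c + 1 : 1;
                }
        }
}

When a resource appears or is used up, only the tiles within the radius of it need their counts refreshed, which fits the edit about resources changing at runtime.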
Predefined data would still be the best way forward.
As modifying the map size and adding/losing resources would not happen often, just update this data table when they do happen. Perhaps you could do the map/resource changes once per day, and have everything done in a daily database update.
In this way, finding a valid location would be far faster than any algorithm you implement to search all the tiles around it.
If the game isn't going to be designed for a huge number of players, most games implement "start spots" on the map. You'd hand-pick them and record the positions in your map somehow, probably similar to how you're implementing the map resources (i.e., on that spot, there exists an item you can pick up, but on top of the tile map).
Since the resources spawn at random, you could either not spawn resources on the start spots (which could be visible or not), or simply not spawn a player at a start spot on which there is a resource (or look within a 9-cell box to find a close alternate location).
Certainly you would want to hold the set of possible starting locations and update it as resources are created and consumed.
It seems like your best bet is to calculate open locations at map generation. Have your start location calculation function optionally take grid location and size or rectangle corners.
Have a list for Free locations and Occupied locations. Player occupies territory? Move resources in range to the Occupied list. Player gets crushed mercilessly? Move resources in range to the Free list. Resource eliminated? Delete any locations that used it in your Open/Occupied lists. Resource added? Recalculate using your effect radius to determine effected area. When your map area expands, just run the initial calculations on the new section of your grid + effect radius and add the new locations.
Then you just have to set the events up and pick a random Free value when someone joins.
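A minimal sketch of that bookkeeping (invented names, coordinates as tuples, needs System.Linq):

class SpawnTracker
{
    public readonly HashSet<(int x, int y)> Free = new HashSet<(int x, int y)>();
    public readonly HashSet<(int x, int y)> Occupied = new HashSet<(int x, int y)>();

    public void OnTerritoryClaimed(IEnumerable<(int x, int y)> inRange)
    {
        foreach (var p in inRange)
            if (Free.Remove(p)) Occupied.Add(p);   // player moved in: demote
    }

    public void OnTerritoryFreed(IEnumerable<(int x, int y)> inRange)
    {
        foreach (var p in inRange)
            if (Occupied.Remove(p)) Free.Add(p);   // player crushed: promote
    }

    public (int x, int y)? PickRandomFree(Random rng) =>
        Free.Count == 0 ? ((int x, int y)?)null : Free.ElementAt(rng.Next(Free.Count));
}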