Splitting up A* pathing of many units into separate game frames - C#

So my issue is that, for large groups of units, attempting to pathfind for all of them in the same frame causes a pretty noticeable slowdown. When pathing for 1 or 2 units the slowdown is generally not noticeable, but for many more than that, depending on the complexity of the path, it can get very slow.
While my A* could probably afford a bit of a tune-up, I also know that another way to speed up the pathing is to divvy up the pathfinding over multiple game frames. What's a good method to accomplish this?
I apologize if this is an obvious or easily searched question; I couldn't really think of how to put it into a searchable string of words.
More info: This is A* on a rectilinear grid, and programmed using C# and the XNA framework. I plan on having potentially up to 50-75 units in need of pathing.
Thanks.

Scalability
There are several ways of optimizing for this situation. For one, you may not have to split the work across multiple game frames at all. To some extent, scalability is the real issue here: pathing 100 units is at least 100 times more expensive than pathing 1 unit.
So, how can we make pathing scale better? Well, that does depend on your game design. I'm going to (perhaps wrongly) assume a typical RTS scenario: several groups of units, with each group relatively close together. The pathing solutions for many units in close proximity will be rather similar, so the units could request paths from some kind of pathing solver. This solver could keep a table of recent pathing requests and their solutions and avoid calculating the same output from the same input multiple times. This is called memoization.
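A minimal sketch of such a memoizing solver in C#, assuming grid cells are (x, y) tuples and your existing A* is injected as a delegate; note the cache has to be invalidated whenever the map changes:

using System;
using System.Collections.Generic;

class PathCache
{
    readonly Func<(int x, int y), (int x, int y), List<(int x, int y)>> solver;
    readonly Dictionary<((int, int), (int, int)), List<(int x, int y)>> cache
        = new Dictionary<((int, int), (int, int)), List<(int x, int y)>>();

    public PathCache(Func<(int x, int y), (int x, int y), List<(int x, int y)>> solver)
    {
        this.solver = solver;
    }

    public List<(int x, int y)> GetPath((int x, int y) start, (int x, int y) goal)
    {
        var key = (start, goal);
        if (!cache.TryGetValue(key, out var path))
        {
            path = solver(start, goal);   // fall through to the real A* only once
            cache[key] = path;
        }
        return path;
    }

    public void Invalidate() { cache.Clear(); }   // call whenever obstacles change
}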
Another addition to this could involve making a hierarchy out of your grid or graph: solve on the simpler graph first, then switch to a more detailed graph. Multiple units could share the same low-resolution path, taking advantage of memoization, but each calculate their own high-resolution path individually if the high-resolution paths are too numerous to reasonably memoize.
Multi-Frame Calculations
As for trying to split the calculations among frames, there are a few approaches I can think of off hand.
If you want to take the multi-threaded route, you could use a worker-thread-pooling model. Each time a unit requests a path, the request is queued for a solution. When a worker thread is free, it is assigned a request to solve. When the thread solves it, you could either have a callback inform the unit, or have the unit query whether the task is complete, most likely once each frame.
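A minimal sketch of that model, assuming the map state the solver reads is immutable while workers run (PathRequest and the injected solve delegate are hypothetical names, not an existing API):

using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Threading;

class PathRequest
{
    public (int x, int y) Start, Goal;
    public Action<List<(int x, int y)>> Callback;   // delivered on the main thread
}

class PathSolverPool
{
    readonly Func<(int x, int y), (int x, int y), List<(int x, int y)>> solve;
    readonly ConcurrentQueue<(PathRequest req, List<(int x, int y)> path)> done
        = new ConcurrentQueue<(PathRequest, List<(int x, int y)>)>();

    public PathSolverPool(Func<(int x, int y), (int x, int y), List<(int x, int y)>> solve)
    {
        this.solve = solve;
    }

    // Called by units; the actual solving happens on a pool thread.
    public void Request(PathRequest req)
    {
        ThreadPool.QueueUserWorkItem(_ => done.Enqueue((req, solve(req.Start, req.Goal))));
    }

    // Call once per frame on the main thread to hand finished paths back to units.
    public void PumpResults()
    {
        while (done.TryDequeue(out var result))
            result.req.Callback(result.path);
    }
}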
If there are no dynamic obstacles, or they are handled separately, the path solver can work from a constant state. If not, there will be a non-negligible amount of complexity, and perhaps even overhead, in having these threads lock mutable game-state information. Paths could be rendered invalid from one frame to the next and require re-validation each frame. Multi-threading may end up being pointless extra overhead where, due to locking and synchronization, threads rarely run in parallel. It's just a possibility.
Alternatively, you could design your pathfinding algorithm to run in discrete steps. After every n steps, check the time elapsed since the start of the algorithm. If it exceeds a certain budget, the pathing algorithm saves its progress and returns. The calling code can then check whether the algorithm completed. On the next frame, resume the pathing algorithm from where it left off. Repeat until solved.
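A minimal sketch of that idea on a rectilinear grid (4-way movement, walkability assumed to be a bool[,]); the open list and scores persist between calls, so the caller invokes Step() each frame until it stops returning InProgress:

using System;
using System.Collections.Generic;
using System.Diagnostics;

enum SolveState { InProgress, Found, Unreachable }

sealed class IncrementalAStar
{
    readonly bool[,] walkable;
    readonly (int x, int y) goal;
    readonly Dictionary<(int, int), int> gScore = new Dictionary<(int, int), int>();
    readonly Dictionary<(int, int), (int, int)> cameFrom = new Dictionary<(int, int), (int, int)>();
    // SortedSet as a simple priority queue; the counter breaks f-score ties.
    readonly SortedSet<(int f, long tie, int x, int y)> open = new SortedSet<(int, long, int, int)>();
    long counter;

    public IncrementalAStar(bool[,] walkable, (int x, int y) start, (int x, int y) goal)
    {
        this.walkable = walkable;
        this.goal = goal;
        gScore[start] = 0;
        open.Add((Heuristic(start), counter++, start.x, start.y));
    }

    int Heuristic((int x, int y) p) => Math.Abs(p.x - goal.x) + Math.Abs(p.y - goal.y);

    // Expands nodes until the per-frame time budget is spent, then yields.
    public SolveState Step(double budgetMs = 2.0)
    {
        var sw = Stopwatch.StartNew();
        while (open.Count > 0)
        {
            if (sw.Elapsed.TotalMilliseconds > budgetMs)
                return SolveState.InProgress;          // progress saved; resume next frame

            var node = open.Min;
            open.Remove(node);
            var cur = (x: node.x, y: node.y);
            if (cur == goal)
                return SolveState.Found;               // walk cameFrom to extract the path

            foreach (var n in new[] { (cur.x + 1, cur.y), (cur.x - 1, cur.y),
                                      (cur.x, cur.y + 1), (cur.x, cur.y - 1) })
            {
                if (n.Item1 < 0 || n.Item2 < 0 ||
                    n.Item1 >= walkable.GetLength(0) || n.Item2 >= walkable.GetLength(1) ||
                    !walkable[n.Item1, n.Item2])
                    continue;

                int g = gScore[cur] + 1;
                if (!gScore.TryGetValue(n, out int old) || g < old)
                {
                    gScore[n] = g;
                    cameFrom[n] = cur;
                    open.Add((g + Heuristic(n), counter++, n.Item1, n.Item2));
                }
            }
        }
        return SolveState.Unreachable;
    }
}

With many units, you would keep one such solver per queued request and give each a slice of the frame budget in turn.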
Even with the single-threaded, voluntary approach to solving paths, if changes in game state affect the validity of paths from frame to frame, you're going to have to re-validate current solutions on a frame-to-frame basis.
Use Partial Solutions
With either of the above approaches, you could run into the issue of units commanded to go somewhere idling for multiple frames before having a complete pathing solution. This may be acceptable and practically undetectable under typical circumstances. If it isn't, you could start moving units along the incomplete solution as-is. However, if consecutive incomplete solutions differ too greatly, units will behave rather indecisively. In practice, this "indecisiveness" may not happen often enough to cause concern.

If your units are all pathing to the same destination, this answer may be applicable; otherwise, it'll just be food for thought.
I use a breadth-first distance algorithm to path units. Start at your destination and mark its distance as 0. Any adjacent cells are 1, cells adjacent to those are 2, etc. Do not path through obstacles, and fill the entire board. This is usually O(A) time complexity, where A is the board's area.
Then, whenever you want to determine which direction a unit needs to go, you simply step to the adjacent square with the minimal distance to the destination: an O(1) lookup.
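For concreteness, a minimal C# sketch of building that distance field, assuming walkable[x, y] marks passable cells; a unit then just steps to whichever adjacent cell holds the smallest value:

using System;
using System.Collections.Generic;

// dist[x, y] ends up holding each cell's step distance to the destination
// (int.MaxValue = unreachable).
static int[,] BuildDistanceField(bool[,] walkable, (int x, int y) dest)
{
    int w = walkable.GetLength(0), h = walkable.GetLength(1);
    var dist = new int[w, h];
    for (int x = 0; x < w; x++)
        for (int y = 0; y < h; y++)
            dist[x, y] = int.MaxValue;

    var queue = new Queue<(int x, int y)>();
    dist[dest.x, dest.y] = 0;
    queue.Enqueue(dest);

    while (queue.Count > 0)
    {
        var (cx, cy) = queue.Dequeue();
        foreach (var (nx, ny) in new[] { (cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1) })
        {
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            if (!walkable[nx, ny] || dist[nx, ny] != int.MaxValue) continue;
            dist[nx, ny] = dist[cx, cy] + 1;
            queue.Enqueue((nx, ny));
        }
    }
    return dist;
}

One field serves every unit heading to that destination, which is exactly why the cost scales with board size rather than unit count.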
I use this pathing algorithm in tower defense games quite often because its time complexity is entirely dependent on the size of the board (usually fairly small in TD games) rather than the number of units (usually fairly large). It allows the player to define their own path (a nice feature), and I only need to run it once a round because of the nature of the game.

Related

Fast way of identifying large amounts of OverlapCircle sensor data

I'm still a beginner without much experience in performance optimization and so on, and I have the following scenario in a little simulation attempt.
Agents have 3 OverlapCircle sensors, each scanning 2-3 different layers which in some cases contain multiple entities, further categorized via multiple tags each (Food, Agent Type, Faction, etc.).
I'm already filtering my LayerMasks depending on the state of the agent, but I then still have to decide which colliders (could be up to about 50-100 colliders at a time) I count and care about, depending on the tags of the gameObjects.
The sensors are already running in a coroutine at 500 ms to save time (quicker would be better), but I'm sure that with all the loops through nested dictionaries I'm going to run into issues quickly with more than a hundred or so agents.
As an example, I'm looping through this kind of collection multiple times for detection and processing, and it already makes me uncomfortable and wonder whether that is sustainable:
//That would be: SensorID, Layer and the Colliders....
Dictionary<int, Dictionary<string, List<Collider2D>>> processedData;
I'm guessing so many colliders aren't a good thing to begin with? But is there another way?
Should I pre-filter with more distinct layers? In my mind, interactive agents are all on the same layer, no matter the faction or whatever...
Is there a better way to categorize entities than my custom multiple-tags script, since that still requires a GetComponent call in a loop?
Should I have the whole sensor-data processing code in a coroutine instead of Update?
Or is there nothing to worry about unless I do this with tens of thousands of agents?
No need for code but I would be grateful for some input on what would be fast and proper ways of doing these things.
Thanks ;)

algorithm to use to find closest object on tile map

I have a game map represented as a tile map. Currently there are two types of objects that are present on the map, relevant to this problem: gatherable resources (trees, rocks, etc.) and buildings built by the player. Buildings are also connected by roads.
I have a problem figuring out an efficient algorithm that could do the following:
find the closest resource to any relevant building (i.e. find the closest tree to a lumberjack/tree-gatherer)
find the closest relevant building to any building (i.e. find the closest storage to any sawmill)
I separated those two issues because the first one does not need roads, but the second one is supposed to only use roads.
So, the result of this should be a single path to a single object, that is the closest to the one I'm figuring it out from. The path is then used by a worker to gather the resource and bring it back, or let's say, to pick a resource from a sawmill and bring it to the closest storage.
I know how to get the shortest path itself (A*, Dijkstra or even Floyd-Warshall), but I'm not sure how to optimally proceed with multiples of those and getting the best/closest one, especially if it's going to be run very regularly and the map object collections (roads and buildings) are expected to change regularly as well.
I'm doing this in Unity3D/C#, but I guess this is not really a Unity3D-related issue.
How should I proceed?
Finding the geographical distance between two objects is a cheap (quick) operation - you can afford to perform it many times per game tick. Use it if the option is available.
Finding the shortest path by making use of terrain features such as roads, tracks etc. is a much more complex operation. As you already mentioned in your post, the A* search algorithm is probably your best option for it, but it is quite slow.
Generally, though, you should not need to run it too often - just compute the path every X seconds (for some value of X) and make your worker spend the next few game ticks following this computed path until you "refresh" it. The more precision and responsiveness to changes in the game environment (e.g. obstacles appearing in your path) you want, the more CPU time you will use.
Try different refresh intervals and find one that gives decent precision while not being too expensive in CPU time. (The right interval depends largely on the number of calls you expect to make: calculating paths for 100 workers is obviously much harder than for 1.)
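A minimal Unity-flavoured sketch of that refresh pattern follows; FindPath, CurrentTile and TargetTile are hypothetical stand-ins for your own solver and position lookups:

using System.Collections.Generic;
using UnityEngine;

public class Worker : MonoBehaviour
{
    public float repathInterval = 2f;   // the "X seconds": larger = cheaper, less responsive
    float nextRepath;
    List<Vector2Int> path;

    void Update()
    {
        if (Time.time >= nextRepath)
        {
            nextRepath = Time.time + repathInterval;
            path = FindPath(CurrentTile(), TargetTile());
        }
        // ...advance along `path` here...
    }

    // Hypothetical stand-ins for your A-star/Dijkstra call and tile lookups.
    List<Vector2Int> FindPath(Vector2Int from, Vector2Int to) { return null; }
    Vector2Int CurrentTile() { return Vector2Int.zero; }
    Vector2Int TargetTile() { return Vector2Int.zero; }
}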

Unity3D large maps advice - How big is too big?

Using the Unity3D engine. I'm making a multiplayer game for fun, using Unity's standard networking. If servers hold 25-50 players, what map size is recommended? How big can I make a very detailed map before it is too big for effective gameplay? How do I optimize a large map? Any ideas and advice would be great; I just want to learn and could not find anything about this on Google :D
*My map is sliced into different parts.
The size of the map itself, in units, doesn't matter for performance at all. Just keep in mind that Unity (as any other game engine) uses floats for geometry, and when the float values get too high or too low, things can get funny.
What matters is the amount of data that your logic, networking and rendering engine have to churn through. These are different things (even logic data vs. networking data), and the limits on each greatly depend on the architecture of your game.
Let's talk about networking. There, two parameters are critical: bandwidth and latency. Bandwidth is how much data you can transfer, and latency is how quickly it arrives. OK, this explanation is confusing, so imagine a truck full of HDDs travelling from one city to another: it has gigantic bandwidth, and you can transfer entire data centers this way. But the latency, the time for the signal to travel, is a few hours. On the other hand, two people from these cities can hop on air balloons, look at each other in the night sky and turn their flashlights on and off. This way, they'll be able to exchange just one bit of information, but with the lowest possible latency: you can't get faster than light.
It also depends on how your networking works. RTS games, for example, often use a lock-step multiplayer architecture that can operate on thousands of units but only exchanges a limited amount of data between users: their input commands. A first-person shooter, on the other hand, relies heavily on low latency (which lock-stepping can damage): 10 ms matter much more when you jump and fire a rocket launcher than when you tell your troops to attack. So the networking logic is organised differently: every player's computer predicts what will happen, but the central server has authority over what actually happened. Of course, these are just general examples of architectures that can be used; choosing the right way to do the networking is a very difficult, but very interesting and creative, task.
Now, the logic itself. Actually, most gameplay logic in modern games is relatively simple in terms of CPU requirements, unless it's physics or AI. Using physics in a multiplayer game is tricky enough on its own because of synchronisation problems (remember floats?); usually the actual logic that can influence who wins and who loses is quite simplified: level geometry is completely static, characters move using simple logic without real physical forces, and physics is usually limited to collision detection. Of course, you see a lot of physics-based visual effects: ragdolls of killed enemies falling down, rubble from explosions flying up; but these are typically de-synchronised between different computers and can't actually affect the gameplay itself.
And finally, rendering. Here, a lot of different constraints come into play. To cover them all, I would have to describe the whole rendering pipeline of Unity on different devices, which is clearly out of scope for this question. Thankfully, there's another way! Instead of reasoning about this limit theoretically, just build a practical prototype. Put different game assets in the scene, run it on the target device and see how it performs. Adjust, repeat! These game assets can be completely ugly or irrelevant; however, they have to have the same technical properties as what you're going to use in the real game: number of polygons, sizes of textures, shaders, etc. Let's say, for example, that you want to create a COD-like multiplayer shooter. To come up with your rendering requirements, just put in N environment models with N polygons each, using NxN textures, put in N characters with skeleton animations and N bones each, and also add some fake logic that emulates CPU-intensive work so your performance measurements will be more realistic. Of course, it won't give you a final picture, but it's a good way to start, and it's great to do this before you start producing a lot of art assets.
Overall, game performance optimisation is a very broad and interesting topic, and it's impossible to give a precise answer to such a question.
You can improve this by reducing the far clipping plane of your camera to shorten the visible render distance, and you can also use LOD, giving your sliced parts lower-detail models.
Check this link for more detail about LOD:
http://docs.unity3d.com/Manual/class-LODGroup.html
If you need further improvement, you can write a script that loads terrain at runtime based on distance around your player.
First and foremost: make the gameplay work, optimize it later. Premature optimization is a waste of a programmer's time.
Secondly: think of Skyrim and Minecraft. The world is separated into pieces that are loaded in the background as you move around. Using that approach (chunking your world into pieces), you can have a virtually infinite world size.
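Since the map is already sliced into parts, a minimal sketch of distance-based streaming might look like the following; toggling whole GameObjects here is a stand-in for real asynchronous scene or asset loading:

using UnityEngine;

public class ChunkStreamer : MonoBehaviour
{
    public Transform player;
    public GameObject[] chunks;        // one per map slice
    public float loadRadius = 200f;    // arbitrary; tune to your draw distance

    void Update()
    {
        foreach (var chunk in chunks)
        {
            bool near = Vector3.Distance(player.position, chunk.transform.position) < loadRadius;
            if (chunk.activeSelf != near)
                chunk.SetActive(near);   // "load"/"unload" the slice
        }
    }
}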

Preventing entities from stacking on top of each other in an overhead shooter

I'm working on an overhead shooter, and what happens is that, over time, as I move in circles around the arena, the enemies begin to stack on top of each other until they're one giant stack of units. It ends up looking pretty silly.
The AI is pretty simple and basic: Find the player, move towards him, and attack him if he's in range.
What's the best way to push them away from each other so that they don't all end up on the same spot? I think flocking is a bit overkill (and probably too intensive, since I'll have 100-200 enemies on the screen at a time).
Ideas?
Thanks!
Here are a few different approaches you could take to solving this problem:
You could define a potential field that assigns a "height" or "badness" to each location on the map. Each unit moves in a way that tries to minimize its potential, perhaps by taking a step in the direction that moves it to the lowest potential it can reach in one step. You could define the potential function so that it slopes toward the player, causing all units to move toward the player, but is also very high around existing units, causing units to avoid bumping into one another. This is a very powerful framework that is exploited all the time in AI; one famous example is its use in the Berkeley Overmind AI for StarCraft, which ended up winning an AI StarCraft competition. If you adopt this sort of approach, you could then tweak the potential function to get the AI to behave in many other interesting ways, and could easily support flocking. I personally think this is the best approach to take, as it's the most flexible, and it would be a great starting point for more advanced pathfinding models. For a very good and practical introduction to potential fields for AI, check out this website. For a rigorous mathematical introduction to potential fields and their applications, you might want to check out this paper surveying different AI methods using potential fields.
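As a minimal sketch of the idea, the snippet below sums an attraction toward the player and an inverse-square repulsion from nearby units into a single steering direction; the 50-unit repulsion radius is an arbitrary assumption to tune:

using System.Collections.Generic;
using System.Numerics;   // any Vector2 type works equally well here

static Vector2 SteerDirection(Vector2 self, Vector2 player, List<Vector2> others)
{
    Vector2 toPlayer = player - self;
    Vector2 force = toPlayer.Length() > 0f ? Vector2.Normalize(toPlayer) : Vector2.Zero;
    foreach (var other in others)
    {
        Vector2 away = self - other;
        float d = away.Length();
        if (d > 0f && d < 50f)           // repulsion radius (tune)
            force += away / (d * d);     // the field is "high" near other units
    }
    return force.Length() > 0f ? Vector2.Normalize(force) : Vector2.Zero;
}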
If you define a bounding circle for each enemy, you could just explicitly disallow units from stacking on top of each other by preventing any two units from coming closer than the sum of their radii. Any time two units get too close, you could either stop one of them from moving, or have them exert forces on one another to spread them apart. When two units bump into each other, you could just pick a random force vector to apply to each unit to try to spread them apart. This is a much hackier and less elegant solution than potential fields, but if you need to get something up and running it's definitely a viable option.
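A minimal sketch of that pairwise push-apart, with Enemy as a hypothetical stand-in for your own unit type:

using System.Collections.Generic;
using System.Numerics;

class Enemy { public Vector2 Position; public float Radius = 8f; }   // hypothetical

static void Separate(List<Enemy> enemies)
{
    for (int i = 0; i < enemies.Count; i++)
        for (int j = i + 1; j < enemies.Count; j++)
        {
            Vector2 delta = enemies[j].Position - enemies[i].Position;
            float minDist = enemies[i].Radius + enemies[j].Radius;
            float dist = delta.Length();
            if (dist > 0f && dist < minDist)
            {
                // Push each unit half the overlap, in opposite directions.
                Vector2 push = delta / dist * ((minDist - dist) * 0.5f);
                enemies[i].Position -= push;
                enemies[j].Position += push;
            }
        }
}

The naive O(n^2) pair loop is fine at 100-200 enemies; a spatial grid can cut it down further if profiling says otherwise.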
You could choose a set of points around the player, then have each unit randomly pick one of those target points to move toward. This would cause the units to spread more thinly in a ring (or whatever shape you'd like) around the player, avoiding the huge masses that you've seen so far. Again, this is way less elegant than using potential fields, but it's another quick hack you could experiment with if your goal is to get something working quickly.
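A sketch of picking evenly spaced ring points, where the radius and point count are arbitrary assumptions:

using System;
using System.Numerics;

static Vector2 RingPoint(Vector2 player, float radius, int index, int pointCount)
{
    double angle = 2.0 * Math.PI * index / pointCount;
    return player + radius * new Vector2((float)Math.Cos(angle), (float)Math.Sin(angle));
}
// e.g. each enemy targets RingPoint(playerPos, 60f, rng.Next(16), 16)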
Hope this helps!

factory floor simulation

I would like to create a simulation of a factory floor, and I am looking for ideas on how to do this. My thoughts so far are:
• A factory is made up of a bunch of processes; some of these processes are in series and some are in parallel. Each process would communicate with its upstream, downstream, and parallel neighbors to let them know of its throughput.
• Each process would have its own basic attributes, like maximum throughput and cost of maintenance as a function of throughput.
Obviously I have not fully thought this out, but I was hoping somebody might be able to give me a few ideas or perhaps a link to an online resource.
update:
This project is only for my own entertainment, and perhaps to learn a little bit along the way. I am not employed as a programmer; programming is just a hobby for me. I have decided to write it in C#.
Simulating an entire factory accurately is a big job.
Firstly you need to figure out: why are you making the simulation? Who is it for? What value will it give them? What parts of the simulation are interesting? How accurate does it need to be? What parts of the process don't need to be simulated accurately?
To figure out the answers to these questions, you will need to talk to whoever it is that wants the simulation written.
Once you have figured out what to simulate, then you need to figure out how to simulate it. You need some models and some parameters for those models. You can maybe get some actual figures from real production and try to derive models from those figures. A model could be a simple linear relationship between an input and an output, a more complex relationship, or it could even include a stochastic (random) effect. If you don't have access to real data, then you'll have to make guesses in your model, but this will never be as good, so try to get real data wherever possible.
You might also want to consider the probabilities of components breaking down, and what effect that might have. What about workers going on strike? Unavailability of raw materials? Wear and tear on the machinery causing progressively lower output over time? Again, you might not want to consider these details; it depends on what the customer wants.
If your simulation involves random events, you might want to run it many times and get an average outcome, for example using a Monte Carlo simulation.
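In code, the Monte Carlo part is just an outer loop; in this minimal sketch, RunOnce() is a hypothetical stand-in for one complete run of your stochastic factory simulation:

var rng = new Random();
const int trials = 10000;
double total = 0.0;
for (int i = 0; i < trials; i++)
    total += RunOnce(rng);            // e.g. returns one batch's total assembly time
double averageOutcome = total / trials;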
To give a better answer, we need to know more about what you need to simulate and what you want to achieve.
Since your customer is yourself, you'll need to decide the answer to all of the questions that Mark Byers asked. However, I'll give you some suggestions and hopefully they'll give you a start.
Let's assume your factory takes a few different parts and assembles them into just one finished product. A flowchart of the assembly process might look like this:
Factory Flowchart http://img62.imageshack.us/img62/863/factoryflowchart.jpg
For the first diamond, where widgets A and B are assembled, assume it takes on average 30 seconds to complete this step. We'll assume the actual time it takes the two widgets to be assembled is normally distributed, with mean 30 s and variance 5 s². For the second diamond, assume it also takes 30 seconds on average, but most of the time it takes much less and occasionally much longer. This is well approximated by an exponential distribution with a mean of 30 s, i.e. a rate parameter of 1/30 per second, often represented in equations by a lambda.
For the first process, compute the time to assemble widgets A and B as:
var rng = new Random();
// C# has no built-in randn, so draw a standard normal via Box-Muller:
double u1 = 1.0 - rng.NextDouble(), u2 = rng.NextDouble();
double stdNormal = Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Sin(2.0 * Math.PI * u2);
double timeA = mean + Math.Sqrt(variance) * stdNormal;   // mean 30, sigma sqrt(5)
For the second process, compute the time to add widget C to the assembly as:
// Exponential sample via inverse transform: -ln(U)/lambda, with lambda = 1/30
double timeB = -Math.Log(1.0 - rng.NextDouble()) / lambda;
Now your total assembly time for each iGadget will be timeA + timeB + waitingTime. At each assembly point, store a queue of widgets waiting to be assembled. If the second assembly point is a bottleneck, its queue will fill up. You can enforce a maximum size for its queue and hold things further upstream when that max size is reached. If an item is in a queue, its assembly time is increased by the time of all the iGadgets ahead of it in the assembly line. I'll leave it up to you to figure out how to code that up; you can then run lots of trials to see what the total assembly time will be on average. What does the resulting distribution look like?
Ways to "spice this up":
Require 3 B widgets for every A widget. Play around with inventory. Replenish inventory at random intervals.
Add a quality assurance check (exponential distribution is good to use here), and reject some of the finished iGadgets. I suggest using a low rejection rate.
Try using different probability distributions than those I've suggested. See how they affect your simulation. Always try to figure out how the input parameters to the probability distributions would map into real world values.
You can do a lot with this simple simulation. The next step would be to generalize your code so that you can have an arbitrary number of widgets and assembly steps. This is not quite so easy. There is an entire field of applied math called operations research that is dedicated to this type of simulation and analysis.
What you're describing is a classical problem addressed by discrete event simulation. A variety of both general purpose and special purpose simulation languages have been developed to model these kinds of problems. While I wouldn't recommend programming anything from scratch for a "real" problem, it may be a good exercise to write your own code for a small queueing problem so you can understand event scheduling, random number generation, keeping track of calendars, etc. Once you've done that, a general purpose simulation language will do all that stuff for you so you can concentrate on the big picture.
A good reference is Law & Kelton. ARENA is a standard package. It is widely used and, IMHO, is very comprehensive for these kinds of simulations. The ARENA book is also a decent book on simulation, and it comes with software that can be applied to small problems. To model bigger problems, you'll need to get a license. You should be able to download a trial version of ARENA here.
It may be more than what you are looking for, but Visual Components is a good industrial simulation tool.
To be clear, I do not work for them, nor does the company I work for currently use them, but we have looked at them.
Automod is the way to go.
http://www.appliedmaterials.com/products/automod_2.html
There is a lot to learn, and it won't be cheap.
ASI's Automod has been in the factory simulation business for about 30 years. It is now owned by Applied Materials. The big players who work with material handling in a warehouse use Automod because it is the proven leader.
