Fast way of identifying larger amounts of OverlapCircle sensor data - C#

I'm still a beginner and don't have much experience with performance optimization, and I have the following scenario in a little simulation attempt.
Agents have 3 OverlapCircle sensors, each of which scans 2-3 different layers that in some cases contain multiple entities, further categorized via multiple tags each (Food, Agent Type, Faction, etc.).
I'm already filtering my LayerMasks depending on the state of the agent, but then I still have to decide which colliders (up to about 50-100 at a time) I count and care about, depending on the tags of the GameObjects.
The sensors already run in a coroutine at 500 ms intervals to save time (quicker would be better), but I'm sure that with all the loops through nested dictionaries I'm going to run into issues quickly with more than a hundred or so agents.
As an example, I'm looping through this kind of collection multiple times for detection and processing; it already makes me uncomfortable, and I wonder whether it's sustainable -_- :
//That would be: SensorID, Layer and the Colliders....
Dictionary<int, Dictionary<string, List<Collider2D>>> processedData;
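A minimal sketch of the non-allocating overlap query pattern that typically helps in this situation (the class name, buffer size and radius are assumptions for illustration, not code from the question):

using UnityEngine;

public class CircleSensor : MonoBehaviour
{
    [SerializeField] private float sensorRadius = 5f;    // assumed radius
    [SerializeField] private LayerMask activeLayerMask;  // pre-filtered per agent state

    // Preallocated buffer reused by every query, so the periodic scans
    // produce no per-call garbage.
    private readonly Collider2D[] hitBuffer = new Collider2D[128];

    public int Scan()
    {
        // Fills hitBuffer and returns the hit count instead of allocating
        // a new Collider2D[] on every call.
        return Physics2D.OverlapCircleNonAlloc(
            transform.position, sensorRadius, hitBuffer, activeLayerMask);
    }
}

The returned count bounds the follow-up loop, and caching each collider's category component once (rather than calling GetComponent per hit) avoids the per-loop lookup cost.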
I'm guessing so many colliders aren't a good thing to begin with? But is there another way?
Should I pre-filter with more distinct layers? In my head, interactive agents are all on the same layer, no matter what faction they belong to...
Is there a better way to categorize entities than my custom multiple-tags script, since that still requires a GetComponent call in a loop? -_-
Should I have the whole sensor-data-processing code run in a coroutine instead of Update?
Or is there nothing to worry about unless I do this with tens of thousands of agents?
No need for code but I would be grateful for some input on what would be fast and proper ways of doing these things.
Thanks ;)

Related

Performance of FindGameObjectWithTag(Tag tag) Versus Using Public Variables for GameObjects in Unity

Basically, I am trying to optimize a Unity game for mobile devices, so it is imperative to keep CPU usage down. As this is a complex game, many of the scripts reference each other (and their GameObjects). Currently, I am using GameObject.FindGameObjectWithTag(Tag tag) to reference other GameObjects, Components, and Scripts. I am also aware that this can be done with public variables by using drag-and-drop in the editor. But as I know which GameObject will be dropped into each level, I found the first option simpler to use, as drag-and-drop led to errors many times and was tedious. However, that will not be a problem, and I feel the performance of one or the other outweighs these drawbacks. I was wondering whether there is a difference in performance between these two approaches, and which one would be better suited for a high-performance mobile game.
FindGameObjectWithTag() has O(n) complexity in the worst case. You will not have performance issues if you have a small number of objects in your scene and a small number of objects searching for objects by tag.
Another approach is to serialize these references in the script and use them directly. With this approach you decrease CPU usage and increase memory usage, since you allocate memory for those references whether or not the objects exist in the scene.
I suggest a third approach, if it is possible for you: search for the Singleton pattern on Google and YouTube. If there is only ever one instance of the object, it lets you hold a reference to it at all times without trying to find it with FindGameObjectsWithTag() every time, resulting in very small CPU and memory usage.
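A minimal sketch of that Unity singleton idea (the class name is a hypothetical example):

using UnityEngine;

public class GameManager : MonoBehaviour
{
    // One globally reachable instance, set in Awake, so no tag
    // search is ever needed to find it.
    public static GameManager Instance { get; private set; }

    private void Awake()
    {
        if (Instance != null && Instance != this)
        {
            Destroy(gameObject);  // enforce a single instance
            return;
        }
        Instance = this;
    }
}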
In my experience your best bet is to create a public variable; if it isn't assigned when you need it, use your FindGameObjectWithTag method (or however you were finding it before) and store the result in that public variable so you only have to do the lookup once. As long as you don't call FindGameObjectWithTag every frame, the lookup hit shouldn't be too bad.
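A minimal sketch of that cache-on-first-use pattern (the class and tag names are assumptions for illustration):

using UnityEngine;

public class EnemyController : MonoBehaviour
{
    public GameObject player;  // assignable in the Inspector, or found lazily

    private GameObject Player
    {
        get
        {
            // The O(n) tag search runs at most once; afterwards the
            // cached reference is returned directly.
            if (player == null)
                player = GameObject.FindGameObjectWithTag("Player");
            return player;
        }
    }
}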

Unity3D large maps advice - How big is too big?

Using the Unity3D engine. I'm making a multiplayer game for fun, using Unity's standard networking. If servers hold 25-50 players, what map size is recommended? How big can I make a very detailed map before it is too big for effective gameplay? How do I optimize a large map? Any ideas and advice would be great, I just want to learn and could not find anything about this on google :D
*My map is sliced into different parts.
The size of the map itself, in units, doesn't matter for performance at all. Just keep in mind that Unity (as any other game engine) uses floats for geometry, and when the float values get too high or too low, things can get funny.
What matters is the amount of data that your logic, networking and rendering engine have to churn through. These are different things, even logic and networking data, and the limits on them depend greatly on the architecture of your game.
Let's talk about networking. There, two parameters are critical as your limits: bandwidth and latency. Bandwidth is how much data you can transfer, and latency is how quickly it gets there. OK, this explanation is confusing. Imagine a truck full of HDDs travelling from one city to another: it has gigantic bandwidth, and you can transfer entire data centers this way. But the latency, the time for the signal to travel, is a few hours. On the other hand, two people in these cities can hop on air balloons, look at each other in the night sky and turn their flashlights on and off. This way, they'll be able to exchange just one bit of information, but with the lowest possible latency: you can't get faster than light.
It also depends on how your networking works. RTS games, for example, often use a lock-step multiplayer architecture that can operate on thousands of units, but will only exchange a limited amount of data between users: their input commands. A first-person shooter, on the other hand, relies heavily on latency (which lock-stepping can damage): 10 ms matters much more when you jump and fire a rocket launcher than when you tell your troops to attack. So, the networking logic is organised differently: every player's computer predicts what will happen, but the central server has authority on what actually happened. Of course, these are just general examples of architectures that can be used; choosing the right way to do the networking is a very difficult, but very interesting and creative task.
Now, the logic itself. Actually, most of the gameplay logic used in modern games is relatively simple in terms of CPU requirements, unless it's physics or AI. Using physics in a multiplayer game is tricky enough on its own, because of synchronisation problems (remember floats?); usually, the actual logic that can influence who wins and who loses is quite simplified: level geometry is completely static, characters move using simple logic without real physical forces, and the physics is usually limited to collision detection. Of course, you see a lot of physics-based visual stuff: ragdolls of killed enemies falling down, rubble from explosions flying up; but these are typically de-synchronised between different computers and can't actually affect the gameplay itself.
And finally, rendering. Here, a lot of different constraints come into play. To cover them all, I would have to describe the whole rendering pipeline of Unity on different devices, and that is clearly out of scope for this question. Thankfully, there's another way! Instead of reasoning about this limit theoretically, just do a practical prototype. Put different game assets into the scene, run it on the target device and see how it performs. Adjust, repeat! These game assets can be completely ugly or irrelevant; however, they have to have the same technical properties as what you're going to use in the real game: number of polygons, sizes of textures, shaders, etc. Let's say, for example, that you want to create a COD-like multiplayer shooter. To come up with your rendering requirements, just put in N environment models with N polygons each, using NxN textures, put in N characters with some skeleton animations with N bones, and also add some fake logic that emulates CPU-intensive stuff so your performance measurements will be more realistic. Of course, it won't give you a final picture, but it's a good way to start, and it's great to do this before you start producing a lot of art assets.
Overall, game performance optimisation is a very broad and interesting topic, and it's impossible to give a precise answer to such a question.
You can improve this by reducing the far clipping plane of your camera to shorten the visible render distance, and you can also use LOD, giving the sliced parts of your map lower-detail versions.
Check this link for more detail about LOD:
http://docs.unity3d.com/Manual/class-LODGroup.html
If you need further improvement, you can write a script that loads terrain at runtime based on the distance around your player.
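A minimal sketch of that distance-based loading, assuming the map is pre-sliced into chunk GameObjects (all names and the threshold are illustrative):

using UnityEngine;

public class ChunkStreamer : MonoBehaviour
{
    [SerializeField] private Transform player;
    [SerializeField] private GameObject[] chunks;        // the pre-sliced map parts
    [SerializeField] private float loadDistance = 200f;  // assumed threshold

    private void Update()
    {
        foreach (GameObject chunk in chunks)
        {
            // Activate only the slices near the player; deactivate the rest.
            float dist = Vector3.Distance(player.position, chunk.transform.position);
            bool shouldBeActive = dist < loadDistance;
            if (chunk.activeSelf != shouldBeActive)
                chunk.SetActive(shouldBeActive);
        }
    }
}

(In a real game you would more likely stream scenes or asset data rather than toggle objects, but the distance test is the same.)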
First and foremost: make the gameplay work, optimize it later. Premature optimization is a waste of a programmer's time.
Secondly: think of Skyrim and Minecraft. The world is separated into pieces that are loaded in background when you move around. Using that approach (chunking your world into pieces) you can have virtually infinite world size.

Splitting up A* pathing of many units into separate game frames

So my issue is that, for large groups of units, attempting to pathfind for all of them in the same frame is causing a pretty noticeable slowdown. When pathing for 1 or 2 units the slowdown is generally not noticeable, but for many more than that, depending on the complexity of the path, it can get very slow.
While my A* could probably afford a bit of a tune-up, I also know that another way to speed up the pathing is to just divvy up the pathfinding over multiple game frames. What's a good method to accomplish this?
I apologize if this is an obvious or easily searched question, I couldn't really think of how to put it into a searchable string of words.
More info: This is A* on a rectilinear grid, and programmed using C# and the XNA framework. I plan on having potentially up to 50-75 units in need of pathing.
Thanks.
Scalability
There are several ways of optimizing for this situation. For one, you may not have to split the work across multiple game frames. To some extent it seems scalability is the issue: 100 units is at least 100 times more expensive than 1 unit.
So, how can we make pathing more optimized for scalability? Well, that does depend on your game design. I'm going to (perhaps wrongly) assume a typical RTS scenario. Several groups of units, with each group being relatively close in proximity. The pathing solution for many units in close proximity will be rather similar. The units could request pathing from some kind of pathing solver. This pathing solver could keep a table of recent pathing requests and their solutions and avoid calculating the same output from the same input multiple times. This is called memoization.
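A minimal sketch of that memoization idea (the tuple-based cell type and the wrapper are assumptions; you would plug in your own A* call):

using System;
using System.Collections.Generic;

public class PathCache
{
    private readonly Func<(int x, int y), (int x, int y), List<(int x, int y)>> solve;
    private readonly Dictionary<((int, int), (int, int)), List<(int x, int y)>> cache = new();

    public PathCache(Func<(int x, int y), (int x, int y), List<(int x, int y)>> solver)
        => solve = solver;  // your existing A* goes here

    public List<(int x, int y)> GetPath((int x, int y) start, (int x, int y) goal)
    {
        // Identical (start, goal) requests reuse the stored solution
        // instead of re-running the search.
        if (!cache.TryGetValue((start, goal), out var path))
            cache[(start, goal)] = path = solve(start, goal);
        return path;
    }

    public void Invalidate() => cache.Clear();  // call when obstacles change
}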
Another addition to this could involve making a hierarchy out of your grid or graph. Solve on the simpler graph first, then switch to a more detailed graph. Multiple units could use the same low-resolution path, taking advantage of memoization, but each calculate their own high-resolution path individually if the high-resolution paths are too numerous to reasonably memoize.
Multi-Frame Calculations
As for trying to split the calculations among frames, there are a few approaches I can think of off hand.
If you want to take the multi-threaded route, you could use a worker-thread-pooling model. Each time a unit requests a path, it is queued for a solution. When a worker-thread is free, it is assigned a task to solve. When the thread solves the task, you could either have a callback to inform the unit or you could have the unit query if the task is complete in some manner, most likely queried each frame.
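A minimal sketch of that request/worker/callback flow, kept generic since the actual grid and path types are not shown (all names here are illustrative):

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class AsyncSolver<TRequest, TPath>
{
    private readonly Func<TRequest, TPath> solve;  // your A*, run off the game thread
    private readonly ConcurrentQueue<(TRequest request, TPath path)> done = new();

    public AsyncSolver(Func<TRequest, TPath> solver) => solve = solver;

    // Queue a request for a pool thread; the game loop keeps running meanwhile.
    public void Request(TRequest request) =>
        Task.Run(() => done.Enqueue((request, solve(request))));

    // Call once per frame on the game thread to hand finished paths back.
    public void Drain(Action<TRequest, TPath> onSolved)
    {
        while (done.TryDequeue(out var result))
            onSolved(result.request, result.path);
    }
}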
If there are no dynamic obstacles, or they are handled separately, you can have a constant state that the path solver uses. If not, then there will be a non-negligible amount of complexity, and perhaps even overhead, in having these threads lock mutable game state. Paths could be rendered invalid from one frame to the next and require re-validation each frame. Multi-threading may end up being pointless extra overhead where, due to locking and synchronization, threads rarely run in parallel. It's just a possibility.
Alternatively, you could design your pathfinding algorithm to run in discrete steps. After n steps, check the amount of time elapsed since the start of the algorithm. If it exceeds a certain amount of time, the pathing algorithm saves its progress and returns. The calling code can then check whether the algorithm completed. On the next frame, resume the pathing algorithm from where it was. Repeat until solved.
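A minimal sketch of that voluntary time-slicing, assuming your A* exposes a single-expansion step behind a hypothetical interface:

using System.Diagnostics;

// Hypothetical interface your A* would implement to support time-slicing.
public interface IStepSolver
{
    bool IsComplete { get; }
    void ExpandNextNode();  // one discrete A* expansion step
}

public static class TimeSlicedPathing
{
    // Run the solver until done or until the per-frame budget is spent;
    // returns true when the path is finished.
    public static bool StepWithBudget(IStepSolver solver, double budgetMs)
    {
        var timer = Stopwatch.StartNew();
        while (!solver.IsComplete)
        {
            solver.ExpandNextNode();
            if (timer.Elapsed.TotalMilliseconds >= budgetMs)
                return false;  // out of budget: save progress, resume next frame
        }
        return true;
    }
}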
Even with the single-threaded, voluntary approach to solving paths, if changes in game state affect the validity of paths from frame to frame, you're going to have to re-validate current solutions on a frame-to-frame basis.
Use Partial Solutions
With either of the above approaches, you could run into the issue of units that were commanded to go somewhere idling for multiple frames before having a complete pathing solution. This may be acceptable and practically undetectable under typical circumstances. If it isn't, you could attempt to use the incomplete solution as-is. If successive incomplete solutions differ too greatly, however, units will behave rather indecisively. In practice, this "indecisiveness" may not happen often enough to cause concern.
If your units are all pathing to the same destination, this answer may be applicable, otherwise, it'll just be food for thought.
I use a breadth-first distance algorithm to path units. Start at your destination and mark its distance as 0. Any adjacent cells are 1, cells adjacent to those are 2, etc. Do not path through obstacles, and fill the entire board. This is usually O(A) time complexity, where A is the board's area.
Then, whenever you want to determine which direction a unit needs to go, you simply pick the adjacent square with the minimal distance to the destination. O(1) time complexity.
I use this pathing algorithm for tower defense games quite often because its time complexity depends entirely on the size of the board (usually fairly small in TD games) rather than the number of units (usually fairly large). It allows the player to define their own path (a nice feature), and I only need to run it once a round because of the nature of the game.
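A minimal sketch of that flood fill (the grid is assumed to be a simple bool[,] of blocked cells):

using System.Collections.Generic;

public static class DistanceField
{
    // Breadth-first flood fill from the destination: each cell ends up
    // holding its step distance to the goal; obstacles and unreachable
    // cells stay at int.MaxValue.
    public static int[,] Build(bool[,] blocked, int goalX, int goalY)
    {
        int w = blocked.GetLength(0), h = blocked.GetLength(1);
        var dist = new int[w, h];
        for (int x = 0; x < w; x++)
            for (int y = 0; y < h; y++)
                dist[x, y] = int.MaxValue;

        var queue = new Queue<(int x, int y)>();
        dist[goalX, goalY] = 0;
        queue.Enqueue((goalX, goalY));

        int[] dx = { 1, -1, 0, 0 }, dy = { 0, 0, 1, -1 };
        while (queue.Count > 0)
        {
            var (cx, cy) = queue.Dequeue();
            for (int i = 0; i < 4; i++)
            {
                int nx = cx + dx[i], ny = cy + dy[i];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
                if (blocked[nx, ny] || dist[nx, ny] != int.MaxValue) continue;
                dist[nx, ny] = dist[cx, cy] + 1;
                queue.Enqueue((nx, ny));
            }
        }
        return dist;  // a unit at (x, y) steps to the neighbour with the smallest value
    }
}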

Multiplayer Game Synchronization

I have a server/client architecture implemented, where all state changes are sent to the server, validated and broadcast to all connected clients. This works rather well, but as of now the system does not keep the client instances of the game synchronized.
If there happened to be a 5 second lag between the server and a particular client, he would receive the state change 5 seconds after the rest of the clients, leaving his game state out of sync. I've been searching for various ways to implement a synchronization system between the clients but haven't found much so far.
I'm new to network programming, and not so naive as to think that I can invent a working system myself without dedicating a serious amount of time to it. The idea I've been having, however, is to keep some kind of time system, so each state change would be tied to a specific timestamp in the game. That way, when a client received a state change, it would know exactly in which period of the game the change happened, and would in turn be able to compensate for the lag. The problem with this method is that during those n seconds of lag the game would have continued on the client side, and the client would have to roll back in time to apply the state change, which would definitely get messy.
So I'm looking for papers discussing the subject, or algorithms that solve it. Perhaps my whole design of how the multiplayer system works is flawed, in the sense that a client's game instance shouldn't update unless notification is received from the server? Right now the clients just update themselves in their game loop, assuming that no states have changed.
The basic approach to this is something called Dead Reckoning, and a quite nice article about it can be found here. Basically, it is a prediction algorithm that guesses entities' positions for the times between server updates.
There are more advanced methodologies that build on this concept, but it is a good starting point.
Also, a description of how this is handled in the Source engine (Valve's engine for Half-Life 2) can be found here; the principle is basically the same - until the server tells you otherwise, use a prediction algorithm to move the entity along an expected path - but this article covers in more depth the effect this has on trying to shoot something.
The best resources I've found in this area are these two articles from Valve Software:
Latency Compensating Methods in Client/Server In-game Protocol Design and Optimization
Source Multiplayer Networking
There will never be a way to guarantee perfect synchronisation across multiple viewpoints in real time - the laws of physics make it impossible. If the sun exploded now, how could you guarantee that observers on Alpha Centauri see the supernova at the same time as we would on Earth? Information takes time to travel.
Therefore, your choices are to either model everything accurately with latency that may differ from viewer to viewer (which is what you have currently), or model everything inaccurately without latency and broadly synchronised across viewers (which is where prediction/dead reckoning/extrapolation come in). Slower games like real-time strategy tend to go the first route, faster games go the second route.
In particular, you should never assume that the time a message takes to travel will be constant. This means that merely sending start and stop messages to move entities will never suffice under either model. You need to send periodic updates of the actual state (typically several times a second for faster games) so that the recipient can correct the error in its predictions and interpolations.
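A minimal sketch of that extrapolate-and-correct idea (the struct and helper are illustrative, not from any particular engine):

public struct EntityState
{
    public float X, Y;        // last authoritative position from the server
    public float Vx, Vy;      // last authoritative velocity
    public double ServerTime; // server timestamp of that update
}

public static class DeadReckoning
{
    // Linear extrapolation from the last authoritative state; each new
    // server update replaces the baseline, implicitly correcting the error.
    public static (float x, float y) Predict(EntityState s, double now)
    {
        float dt = (float)(now - s.ServerTime);
        return (s.X + s.Vx * dt, s.Y + s.Vy * dt);
    }
}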
If the client sees events happening at the rate the server feeds them to him, which is the normal way to do it (I've worked with the protocols of Ultima Online, KalOnline and a little bit of World of Warcraft), then this momentary 5 second delay would just make him receive those 5 seconds of events all at once and see them pass really fast or near instantly, and other players would see him "walking" really fast for a short distance if his outgoing messages were delayed too. After that, everything flows normally again. Actually, except for graphics and physics normalization, I can't see any special need to make it synchronize properly; it synchronizes itself.
If you have ever played Valve games on two nearby computers, you would notice they don't care much about minor details like "the exact place where you died" or "where your dead body's gibs flew to". That is all up to the client side and totally affected by latency, but it is irrelevant.
After all, lagged players must accept their condition, or close their damn eMule.
Your best option is to send the changes back to the client from the future, thereby arriving at the client at the same point in time as they do for the other clients that do not have lag problems.

factory floor simulation

I would like to create a simulation of a factory floor, and I am looking for ideas on how to do this. My thoughts so far are:
• A factory is made up of a bunch of processes; some of these processes are in series and some are in parallel. Each process would communicate with its upstream, downstream and parallel neighbors to let them know of its throughput
• Each process would have its own basic attributes, like maximum throughput and cost of maintenance as a function of throughput
Obviously I have not fully thought this out, but I was hoping somebody might be able to give me a few ideas or perhaps a link to an online resource
update:
This project is only for my own entertainment, and perhaps to learn a little bit along the way. I am not employed as a programmer; programming is just a hobby for me. I have decided to write it in C#.
Simulating an entire factory accurately is a big job.
Firstly you need to figure out: why are you making the simulation? Who is it for? What value will it give them? What parts of the simulation are interesting? How accurate does it need to be? What parts of the process don't need to be simulated accurately?
To figure out the answers to these questions, you will need to talk to whoever it is that wants the simulation written.
Once you have figured out what to simulate, then you need to figure out how to simulate it. You need some models and some parameters for those models. You can maybe get some actual figures from real production and try to derive models from the figures. The models could be a simple linear relationship between an input and an output, a more complex relationship, and perhaps even a stochastic (random) effect. If you don't have access to real data, then you'll have to make guesses in your model, but this will never be as good so try to get real data wherever possible.
You might also want to consider the probabilities of components breaking down, and what effect that might have. What about the workers going on strike? Unavailability of raw materials? Wear and tear on the machinery causing progressively lower output over time? Again, you might not want to consider these details; it depends on what the customer wants.
If your simulation involves random events, you might want to run it many times and get an average outcome, for example using a Monte Carlo simulation.
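A minimal sketch of that Monte Carlo loop, assuming runOnce is whatever function performs one randomized run of your factory simulation:

using System;

public static class MonteCarlo
{
    // Average many randomized runs to estimate the expected outcome.
    public static double Average(Func<Random, double> runOnce, int runs = 10000)
    {
        var rng = new Random();
        double total = 0;
        for (int i = 0; i < runs; i++)
            total += runOnce(rng);  // e.g. total assembly time of one simulated day
        return total / runs;
    }
}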
To give a better answer, we need to know more about what you need to simulate and what you want to achieve.
Since your customer is yourself, you'll need to decide the answer to all of the questions that Mark Byers asked. However, I'll give you some suggestions and hopefully they'll give you a start.
Let's assume your factory takes a few different parts and assembles them into just one finished product. A flowchart of the assembly process might look like this:
Factory Flowchart http://img62.imageshack.us/img62/863/factoryflowchart.jpg
For the first diamond, where widgets A and B are assembled, assume it takes on average 30 seconds to complete this step. We'll assume the actual time it takes the two widgets to be assembled is normally distributed, with a mean of 30 s and a variance of 5 s². For the second diamond, assume it also takes 30 seconds on average, but most of the time it doesn't take nearly that long, and other times it takes a lot longer. This is well approximated by an exponential distribution with a mean of 30 s, i.e. a rate parameter (often represented in equations by a lambda) of 1/30 per second.
For the first process, compute the time to assemble widgets A and B as:
var rng = new Random();
// C# has no built-in normal sampler, so use the Box-Muller transform;
// (1 - NextDouble()) keeps the argument of Log strictly positive.
double u1 = 1.0 - rng.NextDouble(), u2 = rng.NextDouble();
double timeA = mean + Math.Sqrt(variance)
             * Math.Sqrt(-2.0 * Math.Log(u1)) * Math.Cos(2.0 * Math.PI * u2);
For the second process, compute the time to add widget C to the assembly as:
// Inverse-transform sampling: an exponential variate is -ln(U) / lambda,
// with U uniform on (0, 1] and lambda = 1/30 per second here.
double timeB = -Math.Log(1.0 - rng.NextDouble()) / lambda;
Now your total assembly time for each iGadget will be timeA + timeB + waitingTime. At each assembly point, store a queue of widgets waiting to be assembled. If the second assembly point is a bottleneck, its queue will fill up. You can enforce a maximum size for its queue, and hold things further upstream when that max size is reached. If an item is in a queue, its assembly time is increased by the assembly times of all the iGadgets ahead of it in the line. I'll leave it up to you to figure out how to code that up, and you can run lots of trials to see what the total assembly time will be, on average. What does the resulting distribution look like?
Ways to "spice this up":
Require 3 B widgets for every A widget. Play around with inventory. Replenish inventory at random intervals.
Add a quality assurance check (exponential distribution is good to use here), and reject some of the finished iGadgets. I suggest using a low rejection rate.
Try using different probability distributions than those I've suggested. See how they affect your simulation. Always try to figure out how the input parameters to the probability distributions would map into real world values.
You can do a lot with this simple simulation. The next step would be to generalize your code so that you can have an arbitrary number of widgets and assembly steps. This is not quite so easy. There is an entire field of applied math called operations research that is dedicated to this type of simulation and analysis.
What you're describing is a classical problem addressed by discrete event simulation. A variety of both general purpose and special purpose simulation languages have been developed to model these kinds of problems. While I wouldn't recommend programming anything from scratch for a "real" problem, it may be a good exercise to write your own code for a small queueing problem so you can understand event scheduling, random number generation, keeping track of calendars, etc. Once you've done that, a general purpose simulation language will do all that stuff for you so you can concentrate on the big picture.
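As a starting point for that exercise, here is a minimal discrete-event calendar sketch (it assumes .NET 6+ for PriorityQueue; the class and method names are illustrative):

using System;
using System.Collections.Generic;

// Events are pulled in timestamp order; each handler may schedule
// follow-up events, which is how a queueing model advances.
public class EventCalendar
{
    private readonly PriorityQueue<Action<EventCalendar>, double> calendar = new();

    public double Clock { get; private set; }

    public void Schedule(double time, Action<EventCalendar> handler) =>
        calendar.Enqueue(handler, time);

    public void Run(double endTime)
    {
        while (calendar.TryDequeue(out var handler, out double time) && time <= endTime)
        {
            Clock = time;   // jump the simulation clock to the next event
            handler(this);  // the handler may call Schedule() for future events
        }
    }
}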
A good reference is Law & Kelton. ARENA is a standard package. It is widely used and, IMHO, is very comprehensive for these kinds of simulations. The ARENA book is also a decent book on simulation, and it comes with software that can be applied to small problems. To model bigger problems, you'll need to get a license. You should be able to download a trial version of ARENA here.
It may be more than what you are looking for, but Visual Components is a good industrial simulation tool.
To be clear I do not work for them nor does the company I work for currently use them, but we have looked at them.
Automod is the way to go.
http://www.appliedmaterials.com/products/automod_2.html
There is a lot to learn, and it won't be cheap.
ASI's Automod has been in the factory simulation business for about 30 years. It is now owned by Applied Materials. The big players who work with material handling in a warehouse use Automod because it is the proven leader.
