Handling population within a game (Unity) - C#

I am currently working on a SimCity-style game and am looking into how to manage the city's population. The script will eventually work like this: when a house is placed, that house adds to the capacity of how many people the city can house, and when its construction is complete I will add that number of citizen structs to a List of citizens.
However, imagine that the population reaches 1,000 or even 10,000 citizens. Will this be an optimal solution for handling that many citizens? Moreover, when a house is removed, the corresponding citizens are removed from the list, leaving job vacancies behind. Eventually I would like the player to be able to shift focus, so that buildings whose enum category matches the chosen focus have their jobs filled by the workforce first. Again, would a List with LINQ queries be the way to go, or would something else be a better solution?
public class City : MonoBehaviour
{
    public List<Citizen> citizens = new List<Citizen>();
    public List<Building> cityBuildings = new List<Building>();

    // TODO (LINQ): if a building has no employees and a citizen is unemployed,
    // assign that citizen to the building
}

public struct Citizen
{
    public Building employedAt;

    public bool CheckEmployment()
    {
        if (employedAt != null)
        {
            return true;
        }
        else
        {
            return false;
        }
    }
}

The answer is - as you may have expected - it depends. LINQ operations are usually quite fast unless you are dealing with millions of objects. However, they produce some garbage that has to be collected eventually. If you perform such operations every frame you may run into GC hiccups; if you run them less often (e.g. only when the player places or removes a house), this approach should work fine.
If you need maximum performance you may want to look at Unity's new DOTS architecture (a.k.a. ECS), which is designed for managing large quantities of data quickly. That said, premature optimization is the root of all evil, and DOTS is quite a beast to wrap your head around.
I'd start with the LINQ queries, make sure they are not called every frame, maybe add some clever caching, and only bring in the big guns once I actually have a performance problem.
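To make that concrete, here is one way the TODO from the question could look as a LINQ-based method. This is only a minimal sketch: the employees, capacity and category members on Building, and the BuildingCategory enum, are hypothetical names invented for illustration.

public void AssignUnemployedCitizens(BuildingCategory focus)
{
    // Buildings that still have room, preferring the focused category.
    // NOTE: employees, capacity and category are hypothetical members.
    var buildingsWithVacancies = cityBuildings
        .Where(b => b.employees.Count < b.capacity)
        .OrderByDescending(b => b.category == focus)
        .ToList();

    for (int i = 0; i < citizens.Count; i++)
    {
        if (citizens[i].employedAt != null)
            continue; // already employed

        var target = buildingsWithVacancies
            .FirstOrDefault(b => b.employees.Count < b.capacity);
        if (target == null)
            break; // no vacancies left anywhere

        var citizen = citizens[i];
        citizen.employedAt = target;
        citizens[i] = citizen;          // write the modified struct back
        target.employees.Add(citizen);
    }
}

Called, say, whenever a building finishes construction or is demolished, this keeps the work out of the per-frame path the answer warns about.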

Related

How to distribute work equally among workers by total count and total value?

I am stuck on a distribution problem.
I have workers and workCases. Each workCase has a value. I need to distribute the workCases so that all workers get an equal number of cases with a similar total value (if possible).
The total number of cases and workers is arbitrary.
What's the best way to solve this? I'm completely stuck.
My first idea was just to order them by value and hand them out like this:
public class WorkCase
{
    public decimal Value { get; set; }
}

public class Worker
{
    public List<WorkCase> Cases { get; set; }
}

public static void Sort(List<WorkCase> cases, List<Worker> workers)
{
    cases = cases.OrderByDescending(c => c.Value).ToList();
    var wCount = workers.Count;
    int i = 0;
    while (cases.Any())
    {
        workers[i].Cases.Add(cases.First());
        if (i == workers.Count - 1)
            i = 0;
        else
            i++;
    }
}
But that's just not really fair towards the last worker.
Thanks for the help.
As Morinator already addressed, this is a variation on the knapsack problem and no perfect solution exists (other than sheer brute force and being lucky enough to have numbers that fit perfectly).
But you can get reasonably close. It's important to note that bigger cases are less flexible than smaller cases. Using a real-world example: if I want you to precisely fill a given container, it's easier to do with sand than with pebbles or even rocks.
This real-world example actually helps a lot here. If you want to pack that container while maximizing the rock/sand ratio (i.e. as many rocks as you can), you first fill the container with rocks and then fill the gaps with sand.
You can use exactly the same approach here, which you already attempted: assign the largest cases first and the smallest cases last. However, your code has a bug: it repeatedly assigns the largest case instead of moving on to the next one.
Because you have multiple workers, a secondary consideration is relevant: divide the large cases among them as evenly as you can. The easiest way to do this is to always assign a case to the worker with the currently lowest workload (in case of ties it doesn't matter whom you pick, just take the first of the tied workers).
Fixing your code:
public static void Sort(List<WorkCase> cases, List<Worker> workers)
{
    // Assign the largest cases first.
    cases = cases.OrderByDescending(c => c.Value).ToList();
    foreach (var workCase in cases)
    {
        // Find the worker with the lowest current case load.
        var workersByCaseLoad = workers.OrderBy(w => w.Cases.Sum(c => c.Value));
        var workerWithLowestCaseLoad = workersByCaseLoad.First();

        // Assign this case to that worker.
        workerWithLowestCaseLoad.Cases.Add(workCase);
    }
}
This won't always net you a perfect solution with exactly matching case loads, but it's a reasonable approximation. There are some fringe examples where the outcome isn't optimal but those cases are rare.
To avoid these fringe cases, the complexity of your code would have to dramatically increase. In most situations, the cost isn't worth the benefit.
Do note that this is not the most performant solution possible, as it involves many collection iterations. But assuming a reasonable number of workers and cases (say, within one company, as a spitballed boundary), it shouldn't be a problem on today's hardware. Some optimization can be done by manually tracking the total case load for each worker, something along the lines of:
var workersByCaseLoad = workers.OrderBy(w => w.TotalCaseLoad);
var workerWithLowestCaseLoad = workersByCaseLoad.First();

workerWithLowestCaseLoad.Cases.Add(workCase);
workerWithLowestCaseLoad.TotalCaseLoad += workCase.Value;
It's not as clean (it requires you to manually handle the totals and keep them in perfect sync at all times), but it avoids having to iterate over each worker's assigned cases every time.
Interestingly, this system also works reasonably well when the full case list is not known at the start of processing (which means you can't sort the cases). As long as you assign the next case to the worker with the lowest load, the outcome remains similarly fair.
You may end up with a less perfect solution if the last few cases are disproportionately large. Think of it this way: you've kept things balanced, and then one more massive case must be assigned. That's always going to cause problems.
But if you can't know the case list in advance, then you can't expect to sort it, and you get a less-perfect-but-still-reasonably-balanced outcome.
This problem sounds like it could be NP-hard; look at the knapsack problem, which is similar. If you weren't restricted in the number of cases per worker, you could sort the workCases descending by value and then always assign the next workCase to the worker with the lowest current load. Note that even this algorithm doesn't necessarily produce an optimal result.
Another thing you could try is to start by assigning every worker the right count of random jobs, then repeatedly find the workers with the lowest and highest loads and let them swap: a heavy job from the heavily loaded worker for a light job from the lightly loaded one (a rough sketch follows below).
Note that this solution too is only a heuristic and may not produce optimal results.
But again, this problem seems to have no fast perfect solution; try to find a known NP-hard problem and reduce it to your problem to show that it is not (efficiently) solvable for you right now.
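For what it's worth, that swap heuristic might look roughly like the sketch below, reusing the Worker/WorkCase classes from the question. The pass limit and the choice of swapping the single heaviest/lightest cases are assumptions made for illustration, not part of the original suggestion.

public static void Rebalance(List<Worker> workers, int maxPasses = 100)
{
    for (int pass = 0; pass < maxPasses; pass++)
    {
        // Pick the least and most loaded workers.
        var byLoad = workers.OrderBy(w => w.Cases.Sum(c => c.Value)).ToList();
        var lowest = byLoad.First();
        var highest = byLoad.Last();

        if (highest.Cases.Count == 0 || lowest.Cases.Count == 0)
            break;

        var heavy = highest.Cases.OrderByDescending(c => c.Value).First();
        var light = lowest.Cases.OrderBy(c => c.Value).First();

        var gap = highest.Cases.Sum(c => c.Value) - lowest.Cases.Sum(c => c.Value);
        var diff = heavy.Value - light.Value;

        // Only swap if it actually narrows the gap between the two workers.
        if (diff <= 0 || diff >= gap)
            break;

        highest.Cases.Remove(heavy);
        lowest.Cases.Remove(light);
        highest.Cases.Add(light);
        lowest.Cases.Add(heavy);
    }
}

This keeps the case count per worker unchanged while nudging the total values towards each other, and it stops as soon as no swap between the two extremes would help.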

Iterating through Dictionary/List realtime every few ms

So I have some simple code like this:
public void OnGUI()
{
    if (this.myDict.Count > 0)
    {
        // blah
        foreach (GameObject gameObject in new List<GameObject>(this.myDict.Keys))
        {
            // keep stuff up to date
        }
    }
}
OnGUI is Unity's built-in method that runs every frame (exactly how often doesn't really matter here, since we are iterating at the level of every frame or close to it). I NEED to keep the data up to date that often - it is a realtime list - but at the same time I NEED to be able to modify the Dictionary (add/remove) while doing so. Now, some people told me that this foreach, which allocates a new List every time, is a massive memory leak and bad, but when asked why and how, they couldn't explain. I did some googling but the results were inconclusive; I found some examples marked as "efficient" that used a similar approach.
I'm greatly confused - is there a better way? Should I define the List as a member variable and reuse it as a temporary buffer instead of creating a local one each time? Would that even matter?
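For what it's worth, the buffer-reuse idea from the question would look roughly like this. It is only a sketch; whether the saving matters in practice is something a profiler has to confirm.

// Reuse one buffer instead of allocating a new List<GameObject> every call.
// An allocation now only happens when the buffer needs to grow.
private readonly List<GameObject> keyBuffer = new List<GameObject>();

public void OnGUI()
{
    if (this.myDict.Count == 0)
        return;

    keyBuffer.Clear();
    keyBuffer.AddRange(this.myDict.Keys);   // snapshot the keys

    foreach (GameObject gameObject in keyBuffer)
    {
        // keep stuff up to date; adding to or removing from myDict here is
        // safe because we iterate over the snapshot, not the dictionary
    }
}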

XNA how to write a good collision algorithm?

I'm currently working on a new game in XNA, and I'm just setting up basic things like sprites/animations, input, game objects, etc.
Meanwhile I'm trying to think of a good way to detect collisions for all the game objects, but I can't come up with a fast algorithm, which limits the game to very few objects.
This is what I did in my last project, which was a school assignment:
public static void UpdateCollisions()
{
    // Empty the list
    AllCollisions.Clear();

    // Find all intersections between collision rectangles in the game
    for (int a = 0; a < AllGameObjectsWithCollision.Count; a++)
    {
        GameObject obja = AllGameObjectsWithCollision[a];
        for (int b = a; b < AllGameObjectsWithCollision.Count; b++)
        {
            GameObject objb = AllGameObjectsWithCollision[b];
            if (obja.Mask != null && objb.Mask != null && obja != objb && !Exclude(new Collision(obja, objb)))
            {
                if (obja.Mask.CollisionRectangle.Intersects(objb.Mask.CollisionRectangle))
                    AllCollisions.Add(new Collision(obja, objb));
            }
        }
    }
}
So it checks for collisions between all objects, but skips adding collisions I found unnecessary.
To then inform my objects that they are colliding, I use a virtual OnCollision method that I call like this:
// Look for collisions for this entity and, if one is found, call the entity's OnCollision method
var entityCol = FindCollision(entity);
if (entityCol != null)
{
    if (entityCol.Other == entity)
        entityCol = new Collision(entity, entityCol.Obj1);
    entity.OnCollision(entityCol);
}

Collision FindCollision(GameObject obj)
{
    return AllCollisions.Find(col => (GameObject)col.Obj1 == obj || (GameObject)col.Other == obj);
}
This made my game slow down fairly quickly as the number of game objects increased.
The first thing that pops up in my head is to create new threads - would that be a good idea, and how would I do it well?
I have studied algorithms a bit, so I know the basic concepts and how ordo (big-O) notation works.
I'm fairly new to C# and programming overall, so don't go too advanced without explaining it. I learn quickly though.
There are quite a few things you can do:
The nested for loops produce a quadratic running time, which grows rather quickly as the number of objects increases. Instead, you could use an acceleration data structure (e.g. grids, kd-trees, BVHs). These reduce the intersection checks to only the entities that can potentially intersect; a rough grid sketch is included at the end of this answer.
If you can order your entities, you can reduce the collision checks by half by just checking entity pairs, where the second is "greater" (with respect to the order) than the first. I.e. you do not need to check b-a if you already checked a-b.
This part can be potentially slow:
!Exclude(new Collision(obja, objb)))
If Collision is a class, then this will always allocate new memory on the heap and recycle it eventually. So it might be better to make Collision a struct (if it is not already) or pass obja and objb directly to Exclude(). This also applies to your other uses of Collision. You haven't shown the implementation of Exclude() but if it is a simple linear search, this can be improved (e.g. if you search linearly in a list that contains an entry for every object, you already have a cubic running time for just the loops).
The implementation of FindCollision() is most likely a linear search (depending on what AllCollisions is) and it can get quite slow. Why are you storing the collisions anyway? Couldn't you just call OnCollision() as soon as you found the collision? A different data structure (e.g. a hash map) would be more appropriate if you want to check if a collision is associated with a specific entity efficiently.
Finally, parallelizing things is certainly a viable option. However, it is somewhat involved, and I would advise focusing on the basics first. Done correctly, the steps above can bring the quadratic (or worse) running time down to O(n log n) or even O(n).
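As a rough illustration of the grid idea from the first point, here is a minimal uniform-grid broad phase built around the question's own types. The cell size, the tuple-based cell keys and the Rectangle bounds are assumptions for illustration, not a drop-in replacement.

// Bucket objects by the grid cells their bounding rectangle overlaps,
// then only test pairs that share a cell.
const int CellSize = 64;

static IEnumerable<(int, int)> CellsFor(Rectangle r)
{
    for (int x = r.Left / CellSize; x <= r.Right / CellSize; x++)
        for (int y = r.Top / CellSize; y <= r.Bottom / CellSize; y++)
            yield return (x, y);
}

public static void UpdateCollisionsWithGrid()
{
    AllCollisions.Clear();
    var grid = new Dictionary<(int, int), List<GameObject>>();

    foreach (var obj in AllGameObjectsWithCollision)
    {
        if (obj.Mask == null)
            continue;
        foreach (var cell in CellsFor(obj.Mask.CollisionRectangle))
        {
            if (!grid.TryGetValue(cell, out var bucket))
                grid[cell] = bucket = new List<GameObject>();
            bucket.Add(obj);
        }
    }

    // Only objects sharing a cell can possibly intersect.
    // Note: a pair overlapping several shared cells can be reported more
    // than once; a HashSet of pairs can deduplicate that if needed.
    foreach (var bucket in grid.Values)
        for (int a = 0; a < bucket.Count; a++)
            for (int b = a + 1; b < bucket.Count; b++)
                if (bucket[a].Mask.CollisionRectangle.Intersects(bucket[b].Mask.CollisionRectangle))
                    AllCollisions.Add(new Collision(bucket[a], bucket[b]));
}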

Rx.NET 'Distinct' to get the latest value?

I'm new to Rx and I'm trying to make a GUI to display stock market data. The concept is a bit like ReactiveTrader, but I need to display the whole "depth", i.e. all prices and their buy/sell quantities in the market, sorted by price, instead of only the "top level" of market buys/sells.
The data structure for each "price level" is like this:
public class MarketDepthLevel
{
    public int MarketBidQuantity { get; set; }
    public decimal Price { get; set; }
    public int MarketAskQuantity { get; set; }
}
And underneath the GUI, a socket listens for network updates and returns them as an Observable:
IObservable<MarketDepthLevel> MarketDepthLevelStream;
which is then transformed into a ReactiveList and eventually bound to a DataGrid.
The transformation should basically pick the latest update for each price level and sort the levels by price. So I came up with something like this:
public IReactiveDerivedList<MarketDepthLevel> MarketDepthStream
{
    get
    {
        return MarketDepthLevelStream
            .Distinct(x => x.Price)
            .CreateCollection()
            .CreateDerivedCollection(x => x, orderer: (x, y) => y.Price.CompareTo(x.Price));
    }
}
But there are problems:
When 'Distinct' sees a price that has appeared before, it discards the new item, but I need the new one to replace the old one (it contains the latest MarketBidQuantity/MarketAskQuantity).
It seems a bit clumsy to CreateCollection/CreateDerivedCollection.
Any thoughts on solving these (especially the 1st problem)?
Thanks
Just group the items and then project each group to be the last item in the group:
return MarketDepthLevelStream
.GroupBy(x => x.Price, (key, items) => items.Last());
If I understand you correctly, you want to project a stream of MarketDepthLevel updates into a list of the latest bid/ask quantities for each price level (in finance parlance, this is a type of ladder). The ladder is held as a ReactiveList bound to the UI. (ObserveOn may be required, although ReactiveList handles this in most cases, I believe.)
Here's an example ladder snapped from http://ratesetter.com, where the "Price" is expressed as a percentage (Rate), and the bid/ask sizes are the amounts lenders want and borrowers need at each price level:
At this point I get slightly lost. I'm confused as to why you need any further Rx operators, since you could simply Subscribe to the update stream as-is and have the handler update a fixed list data-bound to the UI. Doesn't each new event simply need to be added to the ReactiveList if its price is new, or replace the existing entry with a matching price? Imperative code to do this is fine if it's the last step in the chain to the UI.
There is only value in doing this in the Rx stream itself if you need to convert the MarketDepthLevelStream into a stream of ladders. That could be useful, but you don't mention this need.
Such a need could be driven by either the desire to multicast the stream to many subscribers, and/or because you have further transformations or projections you need to make on the ladders.
Bear in mind, if the ladder is large, then working with whole ladder updates in the UI might give you performance issues - in many cases, individual updates into a mutable structure like a ReactiveList are a practical way to go.
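To make that imperative option concrete, a subscription could look roughly like the sketch below. Ladder here is a hypothetical ObservableCollection<MarketDepthLevel> kept sorted by descending price; a ReactiveList would be handled the same way.

var subscription = MarketDepthLevelStream
    .ObserveOn(SynchronizationContext.Current)   // marshal updates to the UI thread
    .Subscribe(update =>
    {
        // Replace an existing level with the same price, if present.
        for (int i = 0; i < Ladder.Count; i++)
        {
            if (Ladder[i].Price == update.Price)
            {
                Ladder[i] = update;
                return;
            }
        }

        // Otherwise insert at the position that keeps prices descending.
        int index = 0;
        while (index < Ladder.Count && Ladder[index].Price > update.Price)
            index++;
        Ladder.Insert(index, update);
    });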
If working with a stream of ladders is a requirement, then look to Observable.Scan. This is the work-horse operator in Rx that maintains local state. It is used for any form of running aggregate - such as a running total, average etc., or in this case, a ladder.
Now, if all you need is the static list described above, I am potentially off on a massive detour here, but it's a useful discussion, so...
You'll want to think carefully about the type used for the ladder aggregate - you need to be conscious of how downstream events will be consumed. It's likely to need an immutable collection type so that things don't get weird for subscribers (each event should in effect be a static snapshot of the ladder). With immutable collections, it may be important to think about memory efficiency.
Here's a simple example of how a ladder stream might work, using an immutable collection from the NuGet pre-release package System.Collections.Immutable:
public static class ObservableExtensions
{
    public static IObservable<ImmutableSortedDictionary<decimal, MarketDepthLevel>>
        ToLadder(this IObservable<MarketDepthLevel> source)
    {
        return source.Scan(
            ImmutableSortedDictionary<decimal, MarketDepthLevel>.Empty,
            (lastLadder, depthLevel) => lastLadder.SetItem(depthLevel.Price, depthLevel));
    }
}
The ToLadder extension method uses an empty immutable ladder as the seed aggregate, and each successive MarketDepthLevel event produces a new, updated ladder. You may want to check whether an ImmutableSortedSet is sufficient.
You would probably want to wrap/project this into your own type, but hopefully you get the idea; a hypothetical usage example follows.
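For illustration, consuming the ladder stream might then look something like this (usage sketch only):

// Each event is a complete, immutable snapshot of the ladder, keyed by price.
IObservable<ImmutableSortedDictionary<decimal, MarketDepthLevel>> ladders =
    MarketDepthLevelStream.ToLadder();

var subscription = ladders.Subscribe(ladder =>
{
    foreach (var level in ladder.Values)
        Console.WriteLine($"{level.Price}: bid {level.MarketBidQuantity} / ask {level.MarketAskQuantity}");
});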
Ultimately, this still leaves you with the challenge of updating the UI - and, as mentioned before, now you are stuck with the whole ladder, meaning you have to bind a whole ladder every time or convert it back into a stream of individual updates - and it's getting too far off topic to tackle that here...!

C#'s `yield return` is creating a lot of garbage for me. Can it be helped?

I'm developing an Xbox 360 game with XNA. I'd really like to use C#'s yield return construct in a couple of places, but it seems to create a lot of garbage. Have a look at this code:
class ComponentPool<T> where T : DrawableGameComponent
{
    List<T> preallocatedComponents;

    public IEnumerable<T> Components
    {
        get
        {
            foreach (T component in this.preallocatedComponents)
            {
                // Enabled often changes during iteration over Components;
                // for example, it's not uncommon for bullet components to get
                // disabled during collision testing
                // (sorry I didn't make that clear originally)
                if (component.Enabled)
                {
                    yield return component;
                }
            }
        }
    }
...
I use these component pools everywhere - for bullets, enemies, explosions; anything numerous and transient. I often need to loop over their contents, and I'm only ever interested in components that are active (i.e., Enabled == true), hence the behavior of the Components property.
Currently, I'm seeing as much as ~800K per second of additional garbage when using this technique. Is this avoidable? Is there another way to use yield return?
Edit: I found this question about the broader issue of how to iterate over a resource pool without creating garbage. A lot of commenters were dismissive, apparently not understanding the limitations of the Compact Framework, but this commenter was more sympathetic and suggested creating an iterator pool. That's the solution I'm going to use.
The compiler's implementation of iterators does indeed use class objects, and using an iterator implemented with yield return (with foreach, for example) will indeed cause memory to be allocated. In the scheme of things this is rarely a problem, because either considerable work is done while iterating or considerably more memory is allocated doing other things while iterating.
For the memory allocated by an iterator to become a problem, your application must be data-structure intensive and your algorithms must operate on objects without allocating any memory. Think of the Game of Life or something similar. Suddenly it is the iteration itself that dominates, and when the iteration allocates memory, a tremendous amount of memory can end up being allocated.
If your application fits this profile (and only if) then the first rule you should follow is:
avoid iterators in inner loops when a simpler iteration concept is available
For example, if you have an array- or list-like data structure, you already expose an indexer property and a count property, so clients can simply use a for loop instead of foreach with your iterator; a minimal sketch of this follows. It is "easy money" to reduce GC, and it doesn't make your code ugly or bloated, just a little less elegant.
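The sketch below assumes the pool exposes its components through a Count property and an indexer; those members are hypothetical names used for illustration.

// Iterate by index instead of through the yield-return iterator;
// a plain for loop over a List<T> or array allocates nothing per pass.
for (int i = 0; i < pool.Count; i++)
{
    T component = pool[i];
    if (!component.Enabled)
        continue;   // skip inactive components without allocating an enumerator

    // ... per-component work that previously ran inside foreach over Components ...
}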
The second principle you should follow is:
measure memory allocations to see when and where you should apply the first rule
Just for grins, try capturing the filter in a LINQ query and holding onto the query instance. This might reduce the memory allocated each time the query is enumerated.
If nothing else, the expression preallocatedComponents.Where(r => r.Enabled) is a heck of a lot less code to read while doing the same thing as your yield return.
class ComponentPool<T> where T : DrawableGameComponent
{
    List<T> preallocatedComponents = new List<T>();
    IEnumerable<T> enabledComponentsFilter;

    public ComponentPool()
    {
        // The query is lazy: it re-evaluates the current contents of
        // preallocatedComponents each time it is enumerated.
        enabledComponentsFilter = this.preallocatedComponents.Where(r => r.Enabled);
    }

    public IEnumerable<T> Components
    {
        get { return enabledComponentsFilter; }
    }

    ...
