I have a game I've been developing that requires OnTriggerEnter() to be called many times with many different GameObjects. Although it is called a lot, I don't need it to be called extremely frequently, so I was wondering whether it is possible to lower how often the method is called, for example to every other update or even less often than that. Is this possible?
You could check trigger collisions manually with functions like Physics.CheckBox.
Put it in a slow coroutine and cache the result of the collision, so you can always check the last state of it.
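A minimal sketch of that approach, assuming a box-shaped trigger volume (the class and field names are mine):

using UnityEngine;
using System.Collections;

public class SlowTriggerCheck : MonoBehaviour
{
    // Placeholder values; tune the interval and box size for your game.
    [SerializeField] private float checkInterval = 0.2f;
    [SerializeField] private Vector3 halfExtents = Vector3.one;
    [SerializeField] private LayerMask layers = ~0;

    // Cached result of the last check; read this from anywhere.
    public bool IsOverlapping { get; private set; }

    private IEnumerator Start()
    {
        var wait = new WaitForSeconds(checkInterval);
        while (enabled)
        {
            // Physics.CheckBox returns true if any collider overlaps the box.
            IsOverlapping = Physics.CheckBox(transform.position, halfExtents,
                                             transform.rotation, layers);
            yield return wait;
        }
    }
}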
Still in the early phase of learning both Unity and C#, but this came up in an exercise and might be the answer to your problem.

From https://docs.unity3d.com/Manual/Coroutines.html:
By default, a coroutine is resumed on the frame after it yields but it is also possible to introduce a time delay using WaitForSeconds:
This can be used as a way to spread an effect over a period of time, but it is also a useful optimization. Many tasks in a game need to be carried out periodically and the most obvious way to do this is to include them in the Update function. However, this function will typically be called many times per second. When a task doesn’t need to be repeated quite so frequently, you can put it in a coroutine to get an update regularly but not every single frame. An example of this might be an alarm that warns the player if an enemy is nearby.
To set a coroutine running, you need to use the StartCoroutine function:
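A sketch along the lines of the docs' nearby-enemy alarm example (ProximityCheck is a stand-in for your own test):

using UnityEngine;
using System.Collections;

public class EnemyAlarm : MonoBehaviour
{
    // Placeholder helper; replace with your own proximity test.
    private void ProximityCheck() { /* warn the player if an enemy is near */ }

    private IEnumerator DoCheck()
    {
        while (true)
        {
            ProximityCheck();
            yield return new WaitForSeconds(0.1f); // ~10x per second instead of every frame
        }
    }

    private void Start()
    {
        StartCoroutine(DoCheck());
    }
}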
Sometimes I see Unity programmers use one script that inherits MonoBehaviour for almost the entire project: the so-called "Update Managers". All scripts subscribe to a queue for execution, and the manager runs all the functions, removing them from the queue after execution.
Does this really have any effect on optimization?
This was one of the optimization techniques I analyzed in my thesis.
The Unity engine has a Messaging system which allows developers to define methods that an internal system will call based on their functionality. One of the most commonly used Messages is the Update message. Unity inspects every MonoBehaviour the first time the type is accessed (regardless of the scripting backend, Mono or IL2CPP) and checks whether any of the Message methods are defined. If a Message method is defined, the engine caches this information. Then, whenever an instance of this type is instantiated, the engine adds it to the appropriate list and calls the method whenever it should. This is also the key reason why Unity does not care about the visibility of our Message methods, and why they are not called in a deterministic order.
using UnityEngine;

public class Example1 : MonoBehaviour
{
    private void Update() { } // found via the Message system; visibility is ignored
}

public class Example2 : MonoBehaviour
{
    public void Update() { } // equally valid as a Message method
}
Both of the above achieve the same result, but god knows which will be called first.
One of the main problems with this approach is that every time the engine calls a Message method, an interop call (a call from the C/C++ side to the managed C# side) has to happen. In the case of Update, luckily, no marshaling is needed, so this overhead is a bit smaller. However, if our game handles thousands or tens of thousands of objects which all have a script requiring a Message call, this overhead can be significant. A solution is to avoid the interop calls, and a good approach to that is behavior grouping. If we have a MonoBehaviour that is attached to a huge number of GameObjects, we can cut the number of interop calls to just one by introducing an update manager. Since the update manager is also a managed object running managed code, the only interop call happens between the update manager's Update Message and the Unity engine's internal Message handler. Note that this optimization technique is only relevant in large-scale projects, and the frame time saved is more impactful when using the Mono scripting backend. (Remember, IL2CPP transpiles to C++.)
[Diagram illustrating the difference between the two methods.]
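A minimal sketch of such an update manager (UpdateManager, IUpdatable, and ManagedUpdate are my own names; a real implementation would also need to handle removal during iteration):

using System.Collections.Generic;
using UnityEngine;

public interface IUpdatable
{
    void ManagedUpdate();
}

// One Unity Update Message fans out to plain C# calls.
public class UpdateManager : MonoBehaviour
{
    private static readonly List<IUpdatable> updatables = new List<IUpdatable>();

    public static void Register(IUpdatable u) => updatables.Add(u);
    public static void Unregister(IUpdatable u) => updatables.Remove(u);

    // The only interop call per frame happens here.
    private void Update()
    {
        for (int i = 0; i < updatables.Count; i++)
            updatables[i].ManagedUpdate();
    }
}

public class Mover : MonoBehaviour, IUpdatable
{
    private void OnEnable() => UpdateManager.Register(this);
    private void OnDisable() => UpdateManager.Unregister(this);

    public void ManagedUpdate() { /* move the cube up and down, etc. */ }
}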
Let's do a benchmark with Unity's performance tools. The benchmark spawns 10,000 GameObjects, each with a mover script that moves these cubes up and down.
[Screenshot: the example scene using the traditional method.]
Now let's see the results of the benchmarks.
Not surprisingly, IL2CPP leads the competition by far; however, it's still interesting that the Update Manager is twice as fast as the traditional way. If we profiled the execution of the traditional method's IL2CPP build, we would find many Unity-specific calls (checking whether the GameObject exists before invoking a component method, etc.), and these would explain the longer execution time. We can also conclude that IL2CPP is in this case far faster than Mono, usually around twice as fast. The benchmark ran for one minute after a five-second warmup, and both scripting backends had the ideal compiler settings.
Based on the article that your link points to, having an "update manager" does indeed have a positive impact on performance compared to using Unity's Update method. The gist is that if you implement Update in one of your classes, Unity has some additional overhead in calling it; it doesn't run quite as fast as calling a method yourself, such as myObject.Update(). So if you're calling Unity's Update on 10,000 game objects per frame, that additional overhead becomes noticeable.
If you explicitly call your update-type methods from a "manager" class -- rather than letting Unity call the "magic" Update methods -- then you can avoid the additional overhead that comes with using "magic" methods.
But keep in mind that the performance penalty of using Update will only be noticeable if you have a lot of game objects in your scene that all implement Update. Game objects that don't implement Update add no overhead. It is good practice, though, to remove the Update method that Unity adds to all new scripts if you're not using it.
In short, unless you have a huge number of objects in your scene and you are running into performance problems, I wouldn't worry about it.
I am developing in Unity using C#, and would like to ask whether it is appropriate to use IEnumerator coroutines to drive an application's user-logic flow, and whether there are other, better ways to implement this.
To clarify, as a series of strict sequential actions:

1. User triggers a GUI action.
2. A form opens and waits until it is filled out.
3. Check whether the winning condition is satisfied by the tasks in (4); if it is, jump to (5).
4. A series of sequential tasks for the user to complete, returning a result back to (3) after each cycle. Keep cycling between (3) and (4) until the winning condition is satisfied.
5. The winning condition is met; inform the user of the final result and exit the coroutine.
I do hope that my logic is sound, and apologies for the vagueness of the specific task involved.
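For what it's worth, the flow above might look roughly like this as a coroutine; every member name here is a placeholder:

using System.Collections;
using UnityEngine;

public class GameFlow : MonoBehaviour
{
    // Placeholder members standing in for your form and win logic.
    private bool formFilledOut;
    private bool WinConditionMet() => false;
    private IEnumerator RunTaskCycle() { yield return null; }

    // 1. Called from the GUI action that starts the round.
    public void OnGuiAction() => StartCoroutine(RunRound());

    private IEnumerator RunRound()
    {
        // 2. Open the form and wait until it is filled out.
        yield return new WaitUntil(() => formFilledOut);

        // 3 & 4. Cycle the sequential tasks until the winning condition holds.
        while (!WinConditionMet())
            yield return StartCoroutine(RunTaskCycle());

        // 5. Inform the user of the final result; the coroutine then exits.
        Debug.Log("You win!");
    }
}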
Using coroutines is completely fine for the right reasons, although from what I read, what you are making is a manager that checks the state of the game.
What I would do in that case is have a simple manager that doesn't use the update loop or coroutines at all; instead, any GameObject that can change the state of the game tells the manager about the change. By turning it around like this, the manager doesn't have to know about a single game object, and it doesn't spend any performance polling all the relevant objects.

With this architecture you could also add an event handler layer, which solves the remaining problem of GameObjects knowing about the manager; now the manager is completely decoupled from the game :)
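A rough sketch of that push-based shape (GameEvents, ScoreChanged, and the win threshold are all made-up names for illustration):

using System;
using UnityEngine;

// Objects raise a static event; the manager only listens.
public static class GameEvents
{
    public static event Action<int> ScoreChanged;
    public static void RaiseScoreChanged(int score) => ScoreChanged?.Invoke(score);
}

public class GameStateManager : MonoBehaviour
{
    private void OnEnable()  => GameEvents.ScoreChanged += OnScoreChanged;
    private void OnDisable() => GameEvents.ScoreChanged -= OnScoreChanged;

    // Called only when something actually changes; no polling in Update.
    private void OnScoreChanged(int score)
    {
        if (score >= 100) Debug.Log("Winning condition met!");
    }
}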
While what you have in mind is certainly possible, it will be hard to maintain once you have multiple winning conditions, perhaps losing conditions, and more than one player.

If you think this will be an issue for you, I suggest making a "game state" singleton and allowing different game objects to change its status within their respective Update() calls.

(I would also like to point out that creating singletons in Unity is rather easy: during Awake(), save the object itself in a static variable.)
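A minimal sketch of that singleton pattern (GameState and HasWon are placeholder names):

using UnityEngine;

public class GameState : MonoBehaviour
{
    public static GameState Instance { get; private set; }

    public bool HasWon { get; set; } // placeholder shared state

    private void Awake()
    {
        // Save the object itself in a static variable, as described above.
        if (Instance != null && Instance != this) { Destroy(gameObject); return; }
        Instance = this;
    }
}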
I'm writing a Mafia (Werewolf)-style game engine in C#. Writing out the logic of an extended Mafia game, this is the model:
A Player (Actor) has one or more Roles, and a Role contains one or more Abilities. Abilities can be Static, Triggered, or Activated (similar to Magic the Gathering) and have an "MAction" with 0 or more targets (in which order can be important) along with other modifiers. Some occur earlier in the Night phase than others, which is represented by a Priority.
MActions are resolved by placing them in a priority queue and resolving the top one, firing its associated event (which can place more actions on the queue, mostly due to Triggered Abilities) and then actually executing a function.
The problem I see with this approach is that there's no way for an MAction to be Cancelled through its event in this mechanism, and I see no clear way to solve it. How should I implement a system so that either MActions can be cancelled or that responses with higher Priorities end up executing first/delaying the initial MAction?
Thanks in advance. I've spent quite some time thinking (and diagramming) this through, but can't quite get over this hurdle.
Would it be possible to implement a cancellation stack that is checked by each MAction function, so that an MAction only executes if it is not in that stack? That way, any time an action is popped, it would only do something if it hadn't already been cancelled.
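A sketch of that idea (MAction stands in for the questioner's type; the resolver shape, .NET 6's PriorityQueue, and a HashSet in place of a literal stack are my assumptions):

using System.Collections.Generic;

public abstract class MAction
{
    public int Priority;
    public abstract void FireEvent(ActionResolver resolver); // may enqueue or cancel actions
    public abstract void Execute();
}

public class ActionResolver
{
    private readonly HashSet<MAction> cancelled = new HashSet<MAction>();
    private readonly PriorityQueue<MAction, int> queue = new PriorityQueue<MAction, int>();

    public void Enqueue(MAction action) => queue.Enqueue(action, action.Priority);
    public void Cancel(MAction action) => cancelled.Add(action);

    public void ResolveAll()
    {
        while (queue.Count > 0)
        {
            MAction next = queue.Dequeue();
            if (cancelled.Contains(next)) continue; // cancelled while still queued
            next.FireEvent(this);                   // triggered responses may Cancel(next)
            if (!cancelled.Contains(next))
                next.Execute();
        }
    }
}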
The situation as I understand it:
You have a series of things that happen with complicated rules which decide the order of what happens, and the order of what happens decides the quality/magnitude of the effect.
First things first, in order to make your life easier I'd recommend you limit your players to making all their moves before action resolution takes place. Even if this model is abandoned later it should make it easier for you to debug and resolve the actions. This is especially true if later actions can undo the effects of earlier actions like in the following example:
Dave transforms to a werewolf because he triggers the Full Moon ability. Then with werewolf powers he jumps over a wall and bites Buffy. Before Buffy dies she activates her time reverse ability and kills Dave before he jumps over the wall.
Regardless, your dilemma makes me think that you need to use a rules engine like NRules[1], or implement your own. The primary goal of the rules engine will be to order/discard the stuff that happens according to your business logic.
Next you put these actions into a queue/list. The actions are applied against the targets until the business rules tell you to stop (Werewolf Dave died) or there aren't any more actions to apply. Once you stop then the results of the battle/actions are reported to the user.
There are other ways to accomplish your goals but I think this will give you a viable pathway towards your end goal.
[1]: I've never used this library, so I don't know if it is any good.
I have an app that needs to fire off a couple of events at certain times during the day - the times are all defined by the users. I can think of a couple of ways of doing it but none of them sit too well. The timing doesn't have to be of a particularly high resolution - a minute or so each way is fine.
My ideas:

1. When the app starts up, read all the times and start timers that will tick at the appropriate times.
2. Start a timer that checks every minute or so for 'current events'.
Thanks in advance for any better solutions.
Store/index the events sorted by when they next need attention. This could be in memory or not according to how many there are, how often you make changes, etc. If all of your events fire once a day, this list is basically a circular buffer which only changes when users change their events.
Start a timer which will 'tick' at the time of the event at the head of the list. Round up to the next minute if you like.
When the timer fires, process all events which are now in the past [edit - and which haven't already been processed], re-insert them into the list if necessary (i.e. if you don't have the "circular buffer" optimisation), and set a new timer.
Obviously, when you change the set of events, or change the time for an existing event, then you may need to reset the timer to make it fire earlier. There's usually no point resetting it to fire later - you may as well just let it go off and do nothing. And if you put an upper limit of one minute on how long the timer can run (or just have a 1 minute recurring timer), then you can get within 1-minute accuracy without ever resetting. This is basically your option 2.
Arguably you should use an existing framework rather than rolling your own, but I don't know C# so I have no idea what's available. I'm generally a bit wary of the idea of setting squillions of timers, because some environments don't support that (or don't support it well). Hence this scheme, which requires only one. I don't know whether C# has any problems in that respect, but this scheme can easily be arranged to use O(1) RAM if necessary, which can't be beat.
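A sketch of this one-timer scheme (all names are mine; recurring events and error handling are left out):

using System;
using System.Collections.Generic;
using System.Threading;

public class EventScheduler : IDisposable
{
    private readonly object gate = new object();
    private readonly List<(DateTime due, Action run)> events = new List<(DateTime, Action)>();
    private readonly Timer timer;

    public EventScheduler()
    {
        timer = new Timer(_ => Fire(), null, Timeout.Infinite, Timeout.Infinite);
    }

    public void Schedule(DateTime due, Action run)
    {
        lock (gate)
        {
            events.Add((due, run));
            events.Sort((a, b) => a.due.CompareTo(b.due));
            Reset(); // the new event may need to fire earlier than the current head
        }
    }

    // Re-arm the single timer for the event at the head of the list.
    private void Reset()
    {
        if (events.Count == 0) return;
        var delay = events[0].due - DateTime.Now;
        if (delay < TimeSpan.Zero) delay = TimeSpan.Zero;
        timer.Change(delay, Timeout.InfiniteTimeSpan);
    }

    private void Fire()
    {
        var ready = new List<Action>();
        lock (gate)
        {
            // Process all events now in the past, then re-arm for the new head.
            while (events.Count > 0 && events[0].due <= DateTime.Now)
            {
                ready.Add(events[0].run);
                events.RemoveAt(0);
            }
            Reset();
        }
        foreach (var run in ready) run();
    }

    public void Dispose() => timer.Dispose();
}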
Have a look at Quartz.Net. It is a scheduler framework (originally for Java).
This sounds like a classic case for a Windows Service. I think there is a Windows Service project type in VS2005/2008. The service coupled with a simple database and a front-end application to allow users to set the trigger times would be all you need.
If it won't change very often, Scheduled Tasks is also an option.
I've written a few programs along these lines.
I suggest #2. All you need to do is keep a list of times that events are "due" at, and every X amount of time (depending on your resolution) check your list for "now" events. You can pick up some optimization if you can guarantee the list is sorted and that each event on the list is due exactly once. Otherwise, if you have recurring events, you have to make sure you cover your window. What I mean is: if you have an event that is due at 11:30 am and you're checking every minute, it's possible that you check at 11:29:59 and then not again until 11:31:01, due to the imprecision of CPU time slices. So you'll need to be sure that one of those checks (11:29 or 11:31) still picks up the 11:30 hit, and that ONLY one of them does (i.e., you don't run it at both 11:29 and 11:31).
The advantage this approach has over checking only at times you know to be on your list is that it allows your list to be modified by third parties without your knowledge, and your event handler will continue to 'just work'.
The simplest way would likely be to use Windows scheduler.
Otherwise you need to use one of the Timer classes, calculating how long until the first event. This approach, unlike the scheduler, allows new events to be found by the running process (and, possibly, resetting the timer).
The problem with #1 is that the number of milliseconds before an event may be too large to store in the Timer's interval, and as the number of events increases, your number of timers could get unwieldy.
I don't see anything wrong with #2, but I would opt for a background worker or a thread.
I haven't programmed games for about 10 years (My last experience was DJGPP + Allegro), but I thought I'd check out XNA over the weekend to see how it was shaping up.
I am fairly impressed, however as I continue to piece together a game engine, I have a (probably) basic question.
How much should you rely on C#'s Delegates and Events to drive the game? As an application programmer, I use delegates and events heavily, but I don't know if there is a significant overhead to doing so.
In my game engine, I have designed a "chase cam" of sorts, that can be attached to an object and then recalculates its position relative to the object. When the object moves, there are two ways to update the chase cam.
Have an "UpdateCameras()" method in the main game loop.
Use an event handler, and have the chase cam subscribe to object.OnMoved.
I'm using the latter, because it allows me to chain events together and nicely automate large parts of the engine. Suddenly, what would be huge and complex gets dropped down to a handful of 3-5 line event handlers... it's a beauty.
However, if event handlers firing every nanosecond turn out to be a major slowdown, I'll remove them and go with the loop approach.
Ideas?
If you were to think of an event as a subscriber list, in your code all you are doing is registering a subscriber. The number of instructions needed to achieve that is likely to be minimal at the CLR level.
If you want your code to be generic or dynamic, then you need to check whether something is subscribed prior to calling an event. The event/delegate mechanism of C# and .NET provides this to you at very little cost (in terms of CPU).
If you're really concerned about every clock cycle, you'd never write generic/dynamic game logic. It's a trade off between maintainable/configurable code and outright speed.
Written well, I'd favour events/delegates until I could prove it is an issue.
The only way you'll truly know if it is an issue for you is by profiling your code -- which you should do anyway for any game development!
It's important to realize that events in C# are not queued asynchronous events (like, for example the Windows message queue). They are essentially a list of function pointers. So raising an event doesn't have worse performance implications than iterating through a list of function pointers and calling each one.
At the same time, realize that because of this, events are synchronous. If your event listener is slow, you'll slow down the class raising the events.
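To make that concrete, here is a sketch of the chase-cam subscription from the question (the names are illustrative):

using System;

public class MovingObject
{
    public event Action Moved; // essentially a list of function pointers

    public void Move()
    {
        // ... update position ...
        // Raising the event is a synchronous, in-order call to each subscriber;
        // a slow handler here directly slows down Move() itself.
        Moved?.Invoke();
    }
}

public class ChaseCam
{
    public ChaseCam(MovingObject target) => target.Moved += Recalculate;

    private void Recalculate() { /* reposition relative to the target */ }
}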
The main question here seems to be:
"What is the overhead associated with using C# Delegates and Events?"
Events have little overhead in comparison to a regular function call.
The use of delegates can create implicit, and thus hidden, garbage. Garbage can be a major cause of performance problems, especially on the Xbox 360.
The following code generates around 2,000 bytes of garbage per second (at 60 fps) in the form of SpacialItemVisitor delegate objects:
private delegate void SpacialItemVisitor(ISpacialItem item);

protected override void Update(GameTime gameTime)
{
    // Passing the method group here allocates a new SpacialItemVisitor
    // delegate instance on every call -- that is the hidden garbage.
    m_quadTree.Visit(ref explosionCircle, ApplyExplosionEffects);
}

private void ApplyExplosionEffects(ISpacialItem item)
{
    // apply effects to each item inside the explosion circle
}
As long as you avoid generating garbage, delegates are fast enough for most purposes. Because of the hidden dangers, I prefer to avoid them and use interfaces instead.
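For illustration, the interface-based alternative might look like this; it's a sketch, and the Visit overload accepting the interface is an assumption:

// As in the snippet above, but with an interface instead of a delegate,
// so no delegate instance is allocated per call.
public interface ISpacialItem { /* as in the original snippet */ }

public interface ISpacialItemVisitor
{
    void Visit(ISpacialItem item);
}

public class ExplosionVisitor : ISpacialItemVisitor
{
    public void Visit(ISpacialItem item)
    {
        // apply explosion effects to the item
    }
}

// Created once and reused every frame, assuming a matching overload:
// m_quadTree.Visit(ref explosionCircle, m_explosionVisitor);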
In my extra time away from real work, I've been learning XNA too.
IMHO (or not so humble, if you ask my coworkers), the overhead of the event handlers will be overwhelmed by other elements of the game, such as rendering. Given the heavy use of events in normal .NET programming, I would bet the underlying code is well optimized.
To be honest, I think going to an UpdateCameras method might be a premature optimization. The event system probably has more uses than just the camera.
XNA encourages the use of interfaces, events and delegates to drive something written with it. Take a look at the GameComponent related classes which set this up for you.
The answer is, "As much as you feel comfortable with".
To elaborate a little: if, for example, you inherit from the GameComponent class to make a CameraController class and add it to the Game.Components collection, you can then create your camera classes and add them to your CameraController.

Doing this will cause the CameraController to be called regularly, and it will be able to select and activate the proper camera (or multiple cameras, if that is what you are going for).
Here is an example of this (All of his tutorials are excellent):
ReoCode
As an aside, you might be interested to know that Shawn Hargreaves, original developer of Allegro, is one of the main developers on the XNA team :-)
Before going into the performance impact of an event, you must first evaluate whether or not it is needed.
Assuming you are really trying to keep a chase cam updated and it's not just an example, what you are looking for is not an event (though events might do the trick just as well): if you are following an avatar, the likelihood is that it will be moving most of the time.
One approach I found extremely effective is to use hierarchical transformations. If you implement this efficiently, the camera won't be the only object to benefit from such a system; the goal is to keep the camera within the coordinate space of the object it is tracking.
That approach is not the best one if you want to apply some elasticity to the speed and way in which the camera tracks the object; for that, it is best to use an update call, a reference to the target, and some basic acceleration and resistance physics.
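A sketch of that elastic, update-driven approach using XNA types (the offset, stiffness value, and class shape are all placeholders):

using Microsoft.Xna.Framework;

// Elastic chase cam driven from the game's Update loop.
public class ChaseCamera
{
    public Vector3 Position;

    private readonly Vector3 offset = new Vector3(0f, 3f, -8f);
    private const float Stiffness = 5f;

    // Call once per frame with the target's world matrix.
    public void Update(Matrix targetWorld, GameTime gameTime)
    {
        float dt = (float)gameTime.ElapsedGameTime.TotalSeconds;

        // Desired spot in the target's coordinate space.
        Vector3 desired = Vector3.Transform(offset, targetWorld);

        // Ease toward the desired spot; the lag gives the elastic feel.
        Position = Vector3.Lerp(Position, desired, Stiffness * dt);
    }
}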
Events are more useful for things that happen only from time to time or that affect many different aspects of the application, like a character dying: many different systems (kill statistics, the controlling AI, and so on) would like to be aware of such an event. In such a case, having all the interested objects constantly check whether it has happened is far less effective than throwing an event and notifying them only when it happens.