How can we reduce the power consumption of our iOS game? - C#

We just developed an iOS game and users have been complaining that it drains their battery. It runs at 60 frames per second and uses a proprietary game engine (written in C#). Could one of those be the issue, or are there other common factors that should be investigated first?

Apple has some guidelines on reducing power consumption in its iOS programming guide.
It's a good place to start for tips.

Firstly, run the code through Instruments and see how it affects CPU usage (constantly high CPU will drain the battery). Also, do you use any device features such as GPS or Wi-Fi? These will drain the battery further.
Secondly, do you run any background processes when your app should suspend that might be eating away at battery?
You can track each improvement by capturing a device power log, making a change, and saving another log for comparison.
Follow these instructions to accomplish this.

There may be one simple answer: try running your game at 30, or maybe even as low as 24, FPS. Is there any real reason you need to be running it so fast?
I suggested 24, by the way, because it is "technically" about the fastest rate that most human eyes can detect.
In video we aim higher because artifacts from the recording process can be seen, but because games use generated scenes, you generally don't need to go higher than 24.
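For example, if the engine exposes a target frame rate, the change can be a one-liner. A minimal sketch, written against Unity's API purely as an illustration (the asker's engine is proprietary, so it would need an equivalent setting in its own main loop):
using UnityEngine;

public class FrameRateCap : MonoBehaviour
{
    void Awake()
    {
        // Illustrative only: disable vsync so targetFrameRate takes effect,
        // then cap rendering at 30 frames per second.
        QualitySettings.vSyncCount = 0;
        Application.targetFrameRate = 30;
    }
}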

A good first step would be to reduce the frame rate to 30 FPS.
For most games, 60 FPS is overkill. At some point your eyes just cannot tell the difference unless the frame rate is skipping, and that point is around 24-30 FPS, which is why that range is the most used for video and games.
I would caution you, though: if you have a real-time game (especially one based on reflexes), do your game logic on another thread. If you do not, you could end up with the kind of flaw other games have. For example:
Call of Duty: Modern Warfare 3 has a major flaw in its engine design in that the rate at which your gun fires is determined solely by the frame rate of the game, not by a background thread.
This makes certain weapons more dangerous than others, because at a frame rate of about 60 FPS they get a whole number of frames per shot.
So with that said, just try reducing your framerate. That alone is probably what's eating most of the battery.
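To illustrate the fire-rate point, here is a minimal hypothetical sketch (the Weapon class and its members are invented for this example, not taken from any real engine) of a fire-rate timer driven by elapsed time rather than by frame count, so lowering the frame rate does not change how fast a gun shoots:
public class Weapon
{
    private readonly double secondsPerShot;
    private double timeSinceLastShot;

    public Weapon(double shotsPerSecond)
    {
        secondsPerShot = 1.0 / shotsPerSecond;
    }

    // Call once per logic update with the real elapsed time in seconds.
    public void Update(double deltaSeconds, bool triggerHeld)
    {
        timeSinceLastShot += deltaSeconds;
        while (triggerHeld && timeSinceLastShot >= secondsPerShot)
        {
            Fire();
            timeSinceLastShot -= secondsPerShot;
        }
    }

    private void Fire() { /* spawn the projectile, play the sound, etc. */ }
}
The behaviour is identical at 24, 30 or 60 FPS because the shot interval depends only on accumulated real time.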

Related

Unity's Universal Render Pipeline performance problem? (2D)

I never got a real answer for this. I opened a new, empty project (totally blank except for one GameObject) and got 2,500 FPS. Alright, makes sense. The moment I install URP and assign the pipeline asset in the graphics settings: bam, 1,100 FPS. What's going on? I add lights and the FPS keeps decreasing. Wasn't this supposed to be better than the normal standard pipeline? If I can't do this, what other ways are there to make these lights optimized? What can I do about it?
P.S. I have been using this since LWRP and recently switched to URP. It's been at least a year, and I feel like I can't make my own game using these lights and I don't know what to do about it. This is the performance loss after just installing it; imagine what happens when you add game logic. That is bad. Should I be looking for certain settings for this?
If your game is running above 60 FPS you are OK; you do not have to worry about performance at this stage. Add assets according to your game design and tweak your scene with lights. Unity offers many tools that can improve the performance of the game, but I suggest you do profiling only after you have built something concrete.
You shouldn't really look at frame rate for profiling; look at the time per frame, not the FPS count. Remember that FPS is not linear in frame time (it is its reciprocal): 2,500 FPS is a frame time of 0.4 ms, while 1,100 FPS is 0.9 ms. The difference is so small that it's almost not worth comparing. Also, your target should be 60-144 FPS (depending on screen refresh rate), which is a frame time of 16.67 ms down to 6.9 ms.
Also remember that URP is forward-rendered, which means the cost of each light in the scene goes up dramatically compared with deferred rendering in the standard (built-in) pipeline.
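To make the frame-time argument concrete, here is the conversion the answer is doing, as a tiny sketch:
// Milliseconds per frame = 1000 / FPS.
static double FrameTimeMs(double fps) => 1000.0 / fps;

// FrameTimeMs(2500) ≈ 0.40 ms and FrameTimeMs(1100) ≈ 0.91 ms: a difference of half a millisecond.
// FrameTimeMs(60) ≈ 16.67 ms and FrameTimeMs(144) ≈ 6.94 ms: the realistic targets.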

How much memory usage is too much for an XNA game?

I'm making a game in XNA 4.0 and C#. Once I've completed the overall gameplay, I'm going to work on optimising the code so that it won't use too much memory. But that got me thinking: How much memory usage is too much? The game isn't finished yet, but I wanted to check its memory usage in its current state using Task Manager. According to Task Manager, the memory usage mostly stays at a steady 85 MB, but it can also increase to 95-100 MB at certain points (such as when playing a level with multiple objects to update and textures to draw). I don't think that the memory usage has ever been above 101 MB. For a game made in XNA and C#, is 100 MB or even 85 MB considered to be too much? If it is, what's the target amount that I should try to decrease it to?
Also, the game is intended to be played on Windows PCs, rather than something like Android in case that's helpful. :)
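If you want a number you can log from inside the game rather than reading Task Manager, a minimal sketch using standard .NET calls (GC.GetTotalMemory for the managed heap, Process.WorkingSet64 for roughly what Task Manager shows) would look like this:
using System;
using System.Diagnostics;

static class MemoryReport
{
    public static void Print()
    {
        long managedBytes = GC.GetTotalMemory(forceFullCollection: false);
        long workingSetBytes = Process.GetCurrentProcess().WorkingSet64;
        Console.WriteLine("Managed heap: {0:F1} MB, working set: {1:F1} MB",
            managedBytes / (1024.0 * 1024.0),
            workingSetBytes / (1024.0 * 1024.0));
    }
}
Calling MemoryReport.Print() periodically (or at level transitions) gives you a trend over time, which matters more than the absolute number.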

Unity3D large maps advice - How big is too big?

Using the Unity3D engine. I'm making a multiplayer game for fun, using Unity's standard networking. If servers hold 25-50 players, what map size is recommended? How big can I make a very detailed map before it is too big for effective gameplay? How do I optimize a large map? Any ideas and advice would be great, I just want to learn and could not find anything about this on google :D
*My map is sliced into different parts.
The size of the map itself, in units, doesn't matter for performance at all. Just keep in mind that Unity (like any other game engine) uses floats for geometry, and when the float values get too large in magnitude, things can get funny.
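A quick sketch of the "floats get funny" point: far from the origin a 32-bit float can no longer represent small offsets, so small movements simply disappear.
float nearOrigin = 1.0f;
nearOrigin += 0.001f;             // becomes 1.001f as expected

float farFromOrigin = 100000.0f;  // e.g. 100 km from the origin at 1 unit = 1 m
farFromOrigin += 0.001f;          // still 100000.0f: the step is smaller than float precision there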
What matters is the amount of data that your logic, networking and rendering have to churn through. These are different things (even logic and networking data differ), and the limits on each depend greatly on the architecture of your game.
Let's talk about networking. Two parameters are critical as your limits: bandwidth and latency. Bandwidth is how much data you can transfer, and latency is how quickly it gets there. OK, that explanation is confusing, so imagine a truck full of HDDs travelling from one city to another: it has gigantic bandwidth, and you can transfer entire data centers this way, but the latency, the time for the signal to travel, is a few hours. On the other hand, two people from those cities can hop on hot-air balloons, look at each other in the night sky and turn their flashlights on and off. This way they'll exchange just one bit of information, but with the lowest possible latency: you can't get faster than light.
It also depends on how your networking works. RTS games, for example, often use a lock-step multiplayer architecture that can operate on thousands of units but exchanges only a limited amount of data between players: their input commands. A first-person shooter, on the other hand, relies heavily on low latency (which lock-stepping hurts): 10 ms matters much more when you jump and fire a rocket launcher than when you tell your troops to attack. So the networking logic is organised differently: every player's computer predicts what will happen, but the central server has authority over what actually happened. Of course, these are just general examples of architectures that can be used; choosing the right way to do the networking is a very difficult, but very interesting and creative, task.
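As a rough, hypothetical sketch of the lock-step idea (the names are invented for illustration, not taken from any particular library): peers exchange only small per-tick input commands, and every client runs the same deterministic simulation from them.
using System;

// A few bytes per player per tick, regardless of how many units the simulation contains.
[Serializable]
public struct InputCommand
{
    public int Tick;       // simulation step this input applies to
    public byte PlayerId;
    public sbyte MoveX;    // -1, 0 or 1: raw input, never world positions
    public sbyte MoveY;
    public bool Attack;
}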
Now, the logic itself. Most of the gameplay logic in modern games is actually relatively simple in terms of CPU requirements, unless it's physics or AI. Using physics in a multiplayer game is tricky enough on its own because of synchronisation problems (remember the floats?); usually the logic that can actually influence who wins and who loses is quite simplified: level geometry is completely static, characters move using simple logic without real physical forces, and the physics is usually limited to collision detection. Of course, you see a lot of physics-based visual stuff: ragdolls of killed enemies falling down, rubble flying up from explosions; but these are typically de-synchronised between different computers and can't actually affect the gameplay itself.
And finally, rendering. Here a lot of different constraints come into play. To cover them all I would have to describe Unity's whole rendering pipeline on different devices, which is clearly out of scope for this question. Thankfully, there's another way! Instead of reasoning about this limit theoretically, just do a practical prototype. Put different game assets into a scene, run it on the target device and see how it performs. Adjust, repeat! These game assets can be completely ugly or irrelevant; however, they have to have the same technical properties as what you're going to use in the real game: number of polygons, sizes of textures, shaders, and so on. Let's say, for example, that you want to create a COD-like multiplayer shooter. To work out your rendering budget, put in N environment models with N polygons each using NxN textures, put in N characters with skeleton animations with N bones, and don't forget some fake logic that emulates CPU-intensive work so your performance measurements are more realistic. Of course, it won't give you the final picture, but it's a good way to start, and it's great to do that before you start producing a lot of art assets.
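For the "run it on the target device and see how it performs" step, a small sketch of a frame-time logger you could drop into such a prototype; it is written against Unity's API purely as an illustration:
using UnityEngine;

// Logs the average frame time every five seconds so prototypes with different
// asset budgets can be compared on the target device.
public class FrameTimeLogger : MonoBehaviour
{
    private float accumulatedTime;
    private int frameCount;

    void Update()
    {
        accumulatedTime += Time.unscaledDeltaTime;
        frameCount++;
        if (accumulatedTime >= 5.0f)
        {
            float averageMs = accumulatedTime / frameCount * 1000.0f;
            Debug.Log(string.Format("Average frame time: {0:F2} ms ({1:F0} FPS)",
                averageMs, frameCount / accumulatedTime));
            accumulatedTime = 0.0f;
            frameCount = 0;
        }
    }
}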
Overall, game performance optimisation is a very broad and interesting topic, and it's impossible to give a precise answer to such a question.
You can improve this by reducing the camera's far clipping plane to shorten the visible render distance, and you can also use LOD so that your sliced parts render with less detail at a distance.
Check this link for more detail about LOD:
http://docs.unity3d.com/Manual/class-LODGroup.html
If you need more improvement, you can write a script that loads terrain at runtime based on the distance around your player.
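A minimal, hypothetical sketch of that idea (the class and field names are invented for illustration): activate the sliced map parts near the player and deactivate the rest. In a real project you would more likely load scenes additively or stream assets, but the distance check is the same.
using UnityEngine;

public class ChunkStreamer : MonoBehaviour
{
    public Transform player;
    public GameObject[] chunks;        // one per map slice, positioned in the world
    public float loadDistance = 200f;

    void Update()
    {
        foreach (GameObject chunk in chunks)
        {
            float distance = Vector3.Distance(player.position, chunk.transform.position);
            bool shouldBeActive = distance < loadDistance;
            if (chunk.activeSelf != shouldBeActive)
                chunk.SetActive(shouldBeActive);
        }
    }
}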
First and foremost: make the gameplay work and optimize it later. Premature optimization is a waste of a programmer's time.
Secondly: think of Skyrim and Minecraft. The world is separated into pieces that are loaded in the background as you move around. Using that approach (chunking your world into pieces) you can have a virtually infinite world size.

How does XNA timing work?

How does XNA maintain a consistent and precise 60 FPS frame rate? Additionally, how does it maintain such precise timing without pegging the CPU at 100%?
While the code in luke's answer is theoretically right, the methods and properties used are not the best choices:
As the precision of DateTime.Now is only about 30 ms (see "C# DateTime.Now precision"; give or take 20 ms), it is not advisable for high-performance timing (60 FPS means roughly 16 ms per frame). System.Diagnostics.Stopwatch is the timer of choice for real-time work in .NET.
Thread.Sleep suffers from the same precision/resolution problem and is not guaranteed to sleep only for the specified time.
The current XNA FX seems to hook into the Windows message loop, execute its internal pre-update each step, and call Game.Update only if the elapsed time since the last update matches the specified frame rate (e.g. every 16 ms with the default settings). If you really want to know how the XNA FX does the job, Reflector is your friend :)
Random tidbit: Back in the XNA GameStudio 1.0 Alpha/Beta time frame there were quite a few blog posts about the “perfect WinForms game loop”, albeit I fail to find them now…
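A minimal sketch of the Stopwatch-based alternative this answer recommends (placeholder method names, not XNA's actual implementation): sleep coarsely, then spin for the last stretch, because Thread.Sleep alone is too imprecise for a 16 ms budget.
using System.Diagnostics;
using System.Threading;

static class FixedRateLoop
{
    static void UpdateAndDraw() { }    // placeholder for one frame of work

    static void Run()
    {
        double targetFrameMs = 1000.0 / 60.0;
        var frameTimer = Stopwatch.StartNew();

        while (true)
        {
            frameTimer.Restart();
            UpdateAndDraw();

            // Coarse sleep first, then a short spin-wait for the remainder.
            double remaining = targetFrameMs - frameTimer.Elapsed.TotalMilliseconds;
            if (remaining > 2.0)
                Thread.Sleep((int)(remaining - 1.0));
            while (frameTimer.Elapsed.TotalMilliseconds < targetFrameMs)
                Thread.SpinWait(100);
        }
    }
}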
I don't know specifically how XNA does it, but when playing around with OpenGL a few years ago I accomplished the same thing with some very simple code.
At the core of it, I assume XNA has some sort of rendering loop. It may or may not be integrated with a standard event-processing loop, but for the sake of example let's assume it isn't. In that case you could write something like this:
TimeSpan FrameInterval = TimeSpan.FromMilliseconds(1000.0 / 60.0);
DateTime PrevFrameTime = DateTime.MinValue;
while (true)
{
    DateTime CurrentFrameTime = DateTime.Now;
    TimeSpan diff = CurrentFrameTime - PrevFrameTime;
    if (diff >= FrameInterval)
    {
        // Enough time has passed since the last frame: draw and record the time.
        DrawScene();
        PrevFrameTime = CurrentFrameTime;
    }
    else
    {
        // Too early for the next frame: sleep for the remaining time.
        Thread.Sleep(FrameInterval - diff);
    }
}
In reality you would probably use something like Environment.TickCount or a Stopwatch instead of DateTime (it would be more accurate), but I think this illustrates the point. This should only call DrawScene about 60 times a second, and the rest of the time the thread will be sleeping, so it will not incur any CPU time.
Games should run at 60 FPS, but that doesn't mean that they will. That's actually an upper limit for a released game.
If you run a game in debug mode you can get much higher frame rates; for example, a blank starter template on my laptop runs at well over 1,000 FPS in debug mode.
That being said, the XNA framework in a released game will do its best to run at 60 FPS, but the code you include in your project can lower that. For example, something that constantly triggers garbage collection will normally cause a dip in the game's FPS, as will putting complex math in the Update or Draw methods so that it runs every frame, which is usually a bit excessive. There are a number of things to keep in mind to keep your game as streamlined as possible.
If you are asking how the XNA framework makes that ceiling happen, I can't really explain that, but I can say that how you lay out your code can definitely hurt this number, and it doesn't always have to be CPU related. In the case of garbage collection it's just cleaning up RAM, which may not show a spike in CPU usage at all but can still impact your FPS, depending on the amount of garbage and the interval at which it has to run.
You can read all about how the XNA timer was implemented here: Game timing in XNA Game Studio. But basically it tries to wait 1/60 of a second before continuing the loop again; also note that Update can be called multiple times before a render if XNA needs to "catch up".
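The "catch up" behaviour is essentially a fixed-timestep accumulator. A minimal sketch of the idea (placeholder methods, not XNA's actual code): Update runs a whole number of 1/60-second steps per iteration and Draw runs once afterwards, so simulation speed stays constant even when rendering falls behind.
using System.Diagnostics;

static class CatchUpLoop
{
    const double StepSeconds = 1.0 / 60.0;

    static void Update(double dt) { }   // placeholder game logic
    static void Draw() { }              // placeholder rendering

    static void Run()
    {
        var clock = Stopwatch.StartNew();
        double previous = 0.0;
        double accumulator = 0.0;

        while (true)
        {
            double now = clock.Elapsed.TotalSeconds;
            accumulator += now - previous;
            previous = now;

            while (accumulator >= StepSeconds)   // possibly several updates per draw
            {
                Update(StepSeconds);
                accumulator -= StepSeconds;
            }
            Draw();
        }
    }
}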

Why is my Stopwatch.Frequency so low?

Debug.WriteLine("Timer is high-resolution: {0}", Stopwatch.IsHighResolution);
Debug.WriteLine("Timer frequency: {0}", Stopwatch.Frequency);
Result:
Timer is high-resolution: True
Timer frequency: 2597705
This article (from 2005!) mentions a Frequency of 3,579,545, about a million higher than mine. This blog post mentions a Frequency of 3,325,040,000, which is insane.
Why is my Frequency so much comparatively lower? I'm on an i7 920 machine, so shouldn't it be faster?
3,579,545 is the magic number. That's the frequency in hertz before dividing it by 3 and feeding it into the 8253 timer chip in the original IBM PC. The odd-looking number wasn't chosen by accident; it is the frequency of the color burst signal in the NTSC TV system used in the US and Japan. The IBM engineers were looking for a cheap crystal to implement the oscillator, and nothing was cheaper than the one used in every TV set.
Once IBM clones became widely available, it was still important for their designers to choose the same frequency. A lot of MS-DOS software relied on the timer ticking at that rate. Directly addressing the chip was a common crime.
That changed once Windows came around. A version of Windows 2 was the first to virtualize the timer chip; in other words, software wasn't allowed to address the timer chip directly anymore. The processor was configured to run in protected mode, the attempted I/O instruction was intercepted, and kernel code ran instead, allowing the return value of the instruction to be faked. It was now possible for multiple programs to use the timer without stepping on each other's toes, an important first step in breaking the dependency on how the hardware is actually implemented.
The Win32 API (Windows NT 3.1 and Windows 95) formalized access to the timer with an API: QueryPerformanceCounter() and QueryPerformanceFrequency(). A kernel-level component, the Hardware Abstraction Layer, allows the BIOS to pass that frequency through. It was now possible for hardware designers to really drop the dependency on the exact frequency. That took a long time, by the way; around 2000 the vast majority of machines still had the legacy rate.
But the never-ending quest to cut costs in PC design put an end to that. Nowadays the hardware designer just picks whatever frequency happens to be readily available in the chipset. 3,325,040,000 would be such a number; it is most probably the CPU clock rate. High frequencies like that are common in cheap designs, especially ones with an AMD core. Your number is pretty unusual, so odds are your machine wasn't cheap, and the timer is a lot more accurate: CPU clocks have typical electronic-component tolerances.
The frequency depends on the HAL (hardware abstraction layer). Back in the Pentium days it was common to use the CPU tick (based on the CPU clock rate), so you ended up with really high-frequency timers.
With multi-processor and multi-core machines, and especially with variable-rate CPUs (the CPU clock slows down for low-power states), using the CPU tick as the timer becomes difficult and error-prone, so the writers of the HAL seem to have chosen a slower but more reliable hardware clock, like the real-time clock.
The Stopwatch.Frequency value is in ticks per second, so your frequency of 2,597,705 means you have more than 2.5 million ticks per second. Exactly how much precision do you need?
As for the variations in frequency, that is a hardware-dependent thing. Some of the most common hardware differences are the number of cores, the frequency of each core, the current power state of your cpu (or cores), whether you have enabled the OS to dynamically adjust the cpu frequency, etc. Your frequency will not always be the same, and depending on what state your cpu is in when you check it, it may be lower or higher, but generally around the same (for you, probably around 2.5 million.)
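Whatever the frequency turns out to be on a given machine, the usual pattern is to divide elapsed ticks by Stopwatch.Frequency rather than assume a particular rate. A short sketch:
using System;
using System.Diagnostics;
using System.Threading;

long start = Stopwatch.GetTimestamp();
Thread.Sleep(10);                                   // stand-in for the work being measured
long elapsedTicks = Stopwatch.GetTimestamp() - start;

// Frequency is ticks per second, so this conversion is correct on any machine.
double elapsedMs = elapsedTicks * 1000.0 / Stopwatch.Frequency;
double resolutionNs = 1e9 / Stopwatch.Frequency;    // about 385 ns at 2,597,705 ticks/s
Console.WriteLine("{0:F3} ms elapsed, resolution about {1:F0} ns", elapsedMs, resolutionNs);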
I think 2,597,705 corresponds to your processor frequency. Mine is 2,737,822 (i7 930).
