How does XNA maintain a consistent and precise 60 FPS frame rate? Additionally, how does it maintain such precise timing without pegging the CPU at 100%?
While luke's code above is theoretically right, the methods and properties used are not the best choices:
Since the precision of DateTime.Now is only about 30 ms (see C# DateTime.Now precision, give or take 20 ms), it is not advisable for high-performance timing (at 60 FPS a frame is only about 16 ms). System.Diagnostics.Stopwatch is the timer of choice for real-time .NET.
Thread.Sleep suffers from the same precision/resolution problem and is not guaranteed to sleep for only the specified time.
The current XNA FX seems to hook into the Windows message loop, executing its internal pre-update each step and calling Game.Update only if the elapsed time since the last update matches the specified frame rate (e.g. every 16 ms for the default settings). If you really want to know how the XNA FX does the job, Reflector is your friend :)
Random tidbit: back in the XNA Game Studio 1.0 Alpha/Beta time frame there were quite a few blog posts about the "perfect WinForms game loop", though I can't seem to find them now…
I don't know specifically how XNA does it, but when playing around with OpenGL a few years ago I accomplished the same thing using some very simple code.
At the core of it, I assume XNA has some sort of rendering loop. It may or may not be integrated with a standard event-processing loop, but for the sake of example let's assume it isn't. In that case you could write it something like this:
TimeSpan FrameInterval = TimeSpan.FromMilliseconds(1000.0 / 60.0);
DateTime PrevFrameTime = DateTime.MinValue;
while (true)
{
    DateTime CurrentFrameTime = DateTime.Now;
    TimeSpan diff = CurrentFrameTime - PrevFrameTime;
    if (diff >= FrameInterval)
    {
        // Enough time has passed: render a frame and remember when.
        DrawScene();
        PrevFrameTime = CurrentFrameTime;
    }
    else
    {
        // Otherwise sleep off the remainder of the frame instead of spinning.
        Thread.Sleep(FrameInterval - diff);
    }
}
In reality you would probably use something like Stopwatch or Environment.TickCount instead of DateTime (it would be more accurate), but I think this illustrates the point. This should only call DrawScene about 60 times a second, and the rest of the time the thread will be sleeping, so it will not burn CPU time.
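Following the Stopwatch advice above, here is a minimal sketch of the same loop built on Stopwatch instead of DateTime (DrawScene remains a placeholder for your own rendering call):

using System;
using System.Diagnostics;
using System.Threading;

static class FixedRateLoop
{
    static readonly TimeSpan FrameInterval = TimeSpan.FromMilliseconds(1000.0 / 60.0);

    static void DrawScene() { /* your rendering goes here */ }

    static void Main()
    {
        var clock = Stopwatch.StartNew();
        TimeSpan prevFrameTime = TimeSpan.Zero;

        while (true)
        {
            TimeSpan diff = clock.Elapsed - prevFrameTime;
            if (diff >= FrameInterval)
            {
                DrawScene();
                prevFrameTime = clock.Elapsed;
            }
            else
            {
                // Thread.Sleep is itself only accurate to the OS timer
                // resolution (~15 ms by default), so this remains approximate.
                Thread.Sleep(FrameInterval - diff);
            }
        }
    }
}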
Games should run at 60 FPS, but that doesn't mean they will. That's actually an upper limit for a released game.
If you run a game in debug mode you can get much higher frame rates; for example, a blank starter template on my laptop in debug mode runs at well over 1,000 FPS.
That being said, the XNA framework in a released game will do its best to run at 60 FPS, but the code you include in your project can lower that performance. For example, if something constantly triggers garbage collection, you will normally see a dip in the game's FPS; the same goes for putting complex math in the Update or Draw methods, which fire every frame, so heavy work there is usually excessive. There are a number of things to keep in mind to keep your game as streamlined as possible.
If you are asking how the XNA framework makes that ceiling happen, that I can't really explain, but I can say that how you lay out your code can definitely impact this number negatively, and it doesn't always have to be CPU-related. In the case of garbage collection it's just the cleanup of RAM, which may not show a spike in CPU usage at all but can still impact your FPS, depending on the amount of garbage and the interval at which it has to run.
You can read all about how the XNA timer was implemented in Game timing in XNA Game Studio. Basically it tries to wait 1/60 of a second before continuing the loop again; also note that Update can be called multiple times before a render if XNA needs to "catch up". A sketch of that catch-up pattern is below.
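In sketch form (this is an illustration of the pattern, not the actual XNA source; Update and Draw stand in for Game.Update and Game.Draw, and TargetElapsedTime mirrors the XNA property of the same name):

using System;
using System.Diagnostics;

static class CatchUpLoop
{
    static readonly TimeSpan TargetElapsedTime = TimeSpan.FromTicks(TimeSpan.TicksPerSecond / 60);

    static void Update(TimeSpan dt) { /* game logic */ }
    static void Draw() { /* rendering */ }

    static void Main()
    {
        var clock = Stopwatch.StartNew();
        TimeSpan accumulated = TimeSpan.Zero;
        TimeSpan last = clock.Elapsed;

        while (true)
        {
            TimeSpan now = clock.Elapsed;
            accumulated += now - last;
            last = now;

            // If rendering fell behind, Update runs several times before
            // the next Draw, so game time advances in fixed steps.
            while (accumulated >= TargetElapsedTime)
            {
                Update(TargetElapsedTime);
                accumulated -= TargetElapsedTime;
            }

            Draw();
        }
    }
}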
I am still working on my video game and my own game engine, and I am making good progress. This weekend I wanted to take some time to improve the performance of my game and check for memory leaks and so on. While most things look fine, I had an incredibly high CPU load of around 45% (on an Intel i5 with four cores). At first I thought I had some very bad design in one of my modules, but even after removing all parts from the render process I still had a CPU load of around 40%!
This was my render loop after removing all my modules' render calls:
public void Run()
{
    _logger.BeginFunction(this.ToString(), "Run");

    RenderLoop.Run(Form, () =>
    {
        // Clear the depth/stencil view and the back buffer, then present.
        _deviceContext.ClearDepthStencilView(_depthView, SharpDX.Direct3D11.DepthStencilClearFlags.Depth, 1.0f, 0);
        _deviceContext.ClearRenderTargetView(_renderTargetView, Color.Black);
        _swapChain.Present(0, SharpDX.DXGI.PresentFlags.None);
    });

    OnApplicationClosing();
    _logger.EndFunction();
}
So, as you can see, almost nothing happens and I still got that 40% CPU load. I checked whether anything was running in the background by printing the current stack trace every 5 seconds. Nothing else was running in this process after I disabled all my game engine's modules.
Then I remembered I had a similar issue many years ago during my studies, when a calculation thread ran in an endless loop. Since the loop was not slowed down, it executed as fast as possible, which caused a high CPU load. Remembering this, I added an ugly line at the end of my render loop above:
System.Threading.Thread.Sleep(10);
Et voilà, my CPU load dropped to around 10%, even with all my game engine's modules activated.
However, this is not an acceptable final solution for my game engine. I took some time looking online for SharpDX render loop examples, but I was not able to figure out how other people handle this problem.
Is there any way to avoid the high CPU load without slowing the render loop down with a thread sleep?
I'd appreciate any hints and help from you! :-)
Simply replace:
_swapChain.Present(0, SharpDX.DXGI.PresentFlags.None);
with
_swapChain.Present(1, SharpDX.DXGI.PresentFlags.None);
As stated in the MSDN SwapChain::Present documentation:
0 - The presentation occurs immediately, there is no synchronization.
1 through 4 - Synchronize presentation after the nth vertical blank.
With a sync interval of 1, Present blocks until the next vertical blank, so the render thread spends the rest of each frame waiting in the driver instead of spinning, which brings the CPU load down without an arbitrary Thread.Sleep.
Forgive me for this question, but I can't seem to find a good source of when to use which. Would be happy if you can explain it in simple terms.
Furthermore, I am facing this dilemma:
See, I am coding a simple application. I want it to show the elapsed time (hh:mm:ss format or something), but also to be able to "speed up" or "slow down" its time intervals (i.e. speed up so that a minute in real time equals an hour in the app).
For example, in YouTube videos (let's not consider the fact that we can jump to specific parts of the video), we see the actual time spent watching the video in the bottom left corner of the screen, but through the options menu we are able to speed the video up or down.
And we can see that the time gets updated in a manner that agrees with the speed factor (if you choose twice the speed, the timer below updates twice as fast as normal), and you can change this speed rate whenever you want.
This is what I'm after: something like how YouTube videos measure the elapsed time, combined with the ability to change the time intervals. So which of the two do you think I should choose, Timer or Stopwatch?
I'm just coding a Windows Form Application, by the way. I'm simulating something and I want the user to be able to speed up whenever he or she wishes to. Simple as this may be, I wish to implement a proper approach.
As far as I know, the main differences are:
Timer
Timer is just a simple scheduler that runs some operation/method once in a while
It executes the method on a separate thread, which prevents blocking of the main thread
Timer is good when we need to execute some task at a certain time interval without blocking anything
Stopwatch
Stopwatch by default runs on the same thread
It counts time and returns a TimeSpan struct, which is useful when we need some additional information
Stopwatch is good when we need to measure time and get additional information, such as how many processor ticks an operation takes
This has already been covered in a number of other questions, including here. Basically, you can use a Stopwatch together with a speed factor, and the scaled result is your "elapsed time"; a sketch of that approach follows. A more complicated approach is to use a Timer and change its Interval property.
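A minimal sketch of the Stopwatch-with-speed-factor idea (the class and member names here are invented for illustration):

using System;
using System.Diagnostics;

public class ScaledClock
{
    private readonly Stopwatch _watch = Stopwatch.StartNew();
    private TimeSpan _banked = TimeSpan.Zero;   // scaled time accumulated so far
    private double _speed = 1.0;

    public double Speed
    {
        get => _speed;
        set
        {
            // Bank the time elapsed at the old speed before switching,
            // so changing the speed never loses progress.
            _banked += TimeSpan.FromTicks((long)(_watch.Elapsed.Ticks * _speed));
            _watch.Restart();
            _speed = value;
        }
    }

    public TimeSpan Elapsed =>
        _banked + TimeSpan.FromTicks((long)(_watch.Elapsed.Ticks * _speed));
}

Display Elapsed from a UI timer tick in hh:mm:ss; setting Speed = 60.0 makes a real minute count as an hour from that moment on.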
We just developed an iOS game and users have been complaining that it drains their battery. It runs at 60 frames per second and uses a proprietary game engine (written in C#). Could one of those be the issue, or are there other common factors that should be investigated first?
Apple has some guidelines on reducing power consumption in its iOS programming guide.
It's a good place to get started on some tips.
Firstly, run the code through Instruments and see how it affects CPU usage (constant high CPU will drain the battery). Also, do you use any device features such as GPS or Wi-Fi? These will drain the battery further.
Secondly, do you run any background processes when your app should suspend that might be eating away at battery?
You can keep track of any performance you enhance by checking device logs for power consumption, making a change and saving another log.
Follow these instructions to accomplish this.
There may be one simple answer: try running your game at 30, or maybe even as low as 24, FPS. Is there any REAL reason you need to be running it SO fast?
I stated 24, by the way, as it is "technically" the fastest rate that the eyes of the majority of human beings can detect.
In video we try to go higher because there are artifacts that can be seen from the recording process, but because games have generated scenes, you generally don't NEED to go higher than 24.
A good first step would be to reduce the frame-rate to 30 FPS.
For any reasonable game, 60 FPS is overkill. At some point your eyes just cannot tell the difference, unless the frame rate is skipping. That threshold occurs at about 24-30 FPS, which is why those rates are the most used for video and gaming.
I would caution you, though, if you do have a real-time game (especially one based on reflexes), to do your game logic on another thread. If you do not, you could end up with a flaw that other games have. For example:
Call of Duty: Modern Warfare 3 has a major flaw in its engine design in that the rate at which your gun fires is determined solely by the frame rate of the game, and not by a background thread.
This makes certain weapons more dangerous than others, because at a frame rate of about 60 FPS the time per shot gets rounded to a whole number of frames, favoring weapons whose fire rate divides evenly into the frame rate. A sketch of the time-based alternative follows.
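A minimal sketch of driving fire rate by elapsed time instead of frame count (all names here are invented for illustration):

using System;
using System.Diagnostics;

public class Gun
{
    private readonly Stopwatch _clock = Stopwatch.StartNew();
    private readonly TimeSpan _timePerShot;
    private TimeSpan _lastShot;

    public Gun(double roundsPerMinute)
    {
        _timePerShot = TimeSpan.FromSeconds(60.0 / roundsPerMinute);
        _lastShot = -_timePerShot; // allow an immediate first shot
    }

    // Call once per frame while the trigger is held; a shot fires only
    // when enough real time has passed, independent of the frame rate.
    public void TryFire()
    {
        if (_clock.Elapsed - _lastShot >= _timePerShot)
        {
            FireOneShot();
            _lastShot = _clock.Elapsed;
        }
    }

    private void FireOneShot() { /* spawn projectile, play sound, ... */ }
}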
So with that said, just try reducing your framerate. That alone is probably what's eating most of the battery.
Using Stopwatch.GetTimestamp(), we find that if you record the return value and then keep calling it and comparing against the previous return value, it will eventually, but unpredictably, return a value less than the earlier one.
Is this expected behavior?
The purpose of doing this in the production code is to have a microsecond-accurate system time.
The technique involves calling DateTime.UtcNow and also calling Stopwatch.GetTimestamp() as originalUtcNow and originalTimestamp, respectively.
From that point forward, the application simply calls Stopwatch.GetTimestamp(), uses Stopwatch.Frequency to convert the elapsed ticks since originalTimestamp into a time span, and adds that to originalUtcNow.
Then, voilà: an efficient and accurate microsecond DateTime.
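In sketch form, the technique described above looks something like this (ignoring, for the moment, the backward jumps this question is about):

using System;
using System.Diagnostics;

public static class PreciseClock
{
    // Anchor wall-clock time and Stopwatch time once, at startup.
    private static readonly DateTime OriginalUtcNow = DateTime.UtcNow;
    private static readonly long OriginalTimestamp = Stopwatch.GetTimestamp();

    public static DateTime UtcNow
    {
        get
        {
            // Convert elapsed Stopwatch ticks to 100 ns DateTime ticks.
            long elapsedStopwatchTicks = Stopwatch.GetTimestamp() - OriginalTimestamp;
            long elapsedDateTimeTicks =
                (long)(elapsedStopwatchTicks * ((double)TimeSpan.TicksPerSecond / Stopwatch.Frequency));
            return OriginalUtcNow.AddTicks(elapsedDateTimeTicks);
        }
    }
}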
But we find that sometimes Stopwatch.GetTimestamp() returns a lower number.
It happens quite rarely. Our thinking is to simply "reset" when that happens and continue.
HOWEVER, it makes us doubt the accuracy of Stopwatch.GetTimestamp(), or suspect there is a bug in the .NET library.
If you can shed some light on this, please do.
FYI, based on the current timestamp value, the frequency, and long.MaxValue, it appears unlikely that it will roll over during our lifetime, unless it's a hardware issue.
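As a back-of-the-envelope check (on many modern Windows machines Stopwatch.Frequency is 10 MHz; the code below uses whatever your hardware reports):

using System;
using System.Diagnostics;

class RolloverEstimate
{
    static void Main()
    {
        long frequency = Stopwatch.Frequency;              // e.g. 10,000,000 ticks/s
        double secondsToRollover = (double)long.MaxValue / frequency;
        double yearsToRollover = secondsToRollover / (365.25 * 24 * 3600);
        Console.WriteLine(yearsToRollover);                // roughly 29,000 years at 10 MHz
    }
}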
EDIT: We're now calculating this value "per thread" and then "clamping it" to watch for jumps between cores to reset it.
It's possible that you get the jump in time because your thread is jumping cores. See the "note" on this page: http://msdn.microsoft.com/en-us/library/ebf7z0sw.aspx
The behavior of the Stopwatch class will vary from system to system depending on hardware support.
See: http://msdn.microsoft.com/en-us/library/system.diagnostics.stopwatch.ishighresolution.aspx
Also, I believe the underlying equivalent win32 call (QueryPerformanceCounter) contains useful documentation: http://msdn.microsoft.com/en-us/library/windows/desktop/ms644904(v=vs.85).aspx
I don't know exactly about running backwards (which sounds like a small jump back), but I have so far experienced three times that the value of Stopwatch.GetTimestamp() changed so enormously that it caused overflow exceptions in some further calculations of roughly this form:
(Stopwatch.GetTimestamp() - ProgramStartStopwatchTimestamp) * n
where n is some large value, but small enough that if the Stopwatch weren't jumping around enormously, the program could run for years without an overflow exception. Note also that these exceptions occurred many hours after the program started, so the issue is not just that the Stopwatch ran backwards a little bit immediately after start. It simply jumped to a totally different range, in whatever direction.
Regarding the Stopwatch rolling over: in one of the above cases it (the Stopwatch value, not the difference) landed at something like 0xFF4? ???? ???? ????, i.e. it jumped to a range very close to rolling over. After restarting the program multiple times, this new range was still consistently in effect. If that matters anymore, considering that the jumps need to be handled anyway...
If it is additionally necessary to determine which core the timestamp was taken on, it probably helps to know the executing core number. For this there are the functions GetCurrentProcessorNumber (available since Server 2003 and Vista) and GetCurrentProcessorNumberEx (available since Server 2008 R2 and Windows 7). See also this question's answers for more options (including Windows XP).
Note that the scheduler can change the core number at any time. But if one reads the core number before reading the Stopwatch timestamp and again after, and the core number remained the same, then one can perhaps assume the Stopwatch read was also performed on that core; a sketch of that pattern follows.
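A minimal sketch of that read-check pattern (the wrapper class is invented for illustration; GetCurrentProcessorNumber is the real kernel32 export):

using System.Diagnostics;
using System.Runtime.InteropServices;

static class CoreStableTimestamp
{
    [DllImport("kernel32.dll")]
    private static extern uint GetCurrentProcessorNumber();

    // Returns false if the thread may have migrated cores mid-read,
    // in which case the caller should retry or discard the timestamp.
    public static bool TryGetTimestamp(out long timestamp)
    {
        uint coreBefore = GetCurrentProcessorNumber();
        timestamp = Stopwatch.GetTimestamp();
        uint coreAfter = GetCurrentProcessorNumber();
        return coreBefore == coreAfter;
    }
}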
To specifically answer the high-level question "How often does Stopwatch.GetTimestamp() roll over?", Microsoft's answer is:
Not less than 100 years from the most recent system boot, and potentially longer based on the underlying hardware timer used. For most applications, rollover isn't a concern.
I'm writing an application for a touch table using WPF and C#. (And, I'm not terribly familiar with WPF. At all.) We suspect we're not getting the framerate we're "requesting" from the API so I'm trying to write my own FPS calculator. I'm aware of the math to calculate it based on the internal clock, I just don't know how to access a timer within the Windows/WPF API.
What library/commands do I need to get access to a timer?
Although you could use a DispatcherTimer (which marshals its ticks onto the UI thread, causing relativity problems) or a System.Threading.Timer (which might throw an exception if you try to touch any UI controls), I'd recommend you just use the WPF profiling tools :)
I think you're looking for the Stopwatch. Just initialize it and reset it at the start of each iteration; at the end of an iteration, do your calculation.
First of all, are you aware that Microsoft provides a free diagnostic tool that will tell you the frame rate at which WPF is updating the screen? I guess if you're not convinced you're getting the framerate you're asking for, then perhaps you might not trust it, but I've found it to be a reliable tool. It's called Perforator, and it's part of the WPF Performance Suite, which you can get by following the instructions here: http://msdn.microsoft.com/library/aa969767
That's probably simpler than writing your own.
Also, how exactly are you "requesting" a frame rate? What API are you using? Are you using the Timeline's DesiredFrameRate property? If so, this is more commonly used to reduce the frame rate than increase it. (The docs also talk about increasing the frame rate to avoid tearing, but that doesn't really make sense - tearing is caused by presenting frames out of sync with the monitor, and isn't an artifact of slow frame rates. In any case, on Vista or Windows 7, you won't get tearing with the DWM enabled.) It's only a hint, and WPF does not promise to match the suggested frame rate.
As for the measurement technique, there are a number of ways you could go. If you're just trying to work out whether the frame rate is in the right ballpark, you could just increment a counter once per frame (which you'd typically do in an event handler for CompositionTarget.Rendering), and set up a DispatcherTimer to fire once a second, and have it show the value in the UI, and then reset the counter. It'll be somewhat rough and ready as DispatcherTimer isn't totally accurate, but it'll show you whether you've got 15fps when you were expecting 30fps, for example.
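That rough-and-ready counter might look like this (fpsLabel is a hypothetical Label assumed to exist in the window's XAML):

using System;
using System.Windows;
using System.Windows.Media;
using System.Windows.Threading;

public partial class MainWindow : Window
{
    private int _frameCount;

    public MainWindow()
    {
        InitializeComponent();

        // Count one per rendered frame...
        CompositionTarget.Rendering += (s, e) => _frameCount++;

        // ...and publish the total roughly once a second.
        var fpsTimer = new DispatcherTimer { Interval = TimeSpan.FromSeconds(1) };
        fpsTimer.Tick += (s, e) =>
        {
            fpsLabel.Content = _frameCount + " fps";
            _frameCount = 0;
        };
        fpsTimer.Start();
    }
}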
If you're trying to get a more precise view (e.g., you want to try to work out whether frames are being rendered constantly, or if you seem to be getting lost frames from time to time), then that gets a bit more complex. But I'll wait to see if Perforator does the trick for you before making more suggestions.
You want to either wrap the Win32 timing calls you'd normally use (such as QueryPerformanceCounter) via P/Invoke, or use something in .NET that already wraps them.
You could use DateTime.Ticks, but it's probably not high enough resolution. The Stopwatch class uses QueryPerformanceCounter under the covers.
If you want something that's reusable on a lot of systems, rather than a simple diagnostic, be warned about processor-related issues with QPC and Stopwatch. See this question: Can the .NET Stopwatch class be THIS terrible?