Why does one SpriteBatch.Draw cause the frame rate to drop by 23 fps? - c#

I am having a major performance problem with a game I'm developing for Windows Phone 7 in C# XNA 4.0.
There's a lot of code going on, like collision, input, animations, physics and so on.
The frame rate is set to 60fps, but the game only runs at 32fps. I have tried a lot of things, like disabling heavy systems such as collision detection, but nothing raised the frame rate. Then, by chance, I discovered that disabling the drawing of the background, which is just a standard 480x800 sized image (same as the Windows Phone resolution) drawn with the default method "spriteBatch.Draw(Textures.background, Vector2.Zero, Color.White)", takes the frame rate from 32 to 55 fps.

I have tried changing the texture to a plain white one, but that does not help either, and I have also tried moving the drawing of the background to another place in the code, but nothing changed. I tried making a new project with just the background drawn, and there the fps sits at 60 as it should.

I'm only using one SpriteBatch.Begin() and SpriteBatch.End() pair, with all the needed sprites drawn inside it. There are 256 Texture2Ds in the game, all of which are loaded at the beginning. The game is a sidescroller, so the background needs to move to the left all the time, but even if I just pin it to Vector2.Zero, it still costs about 20fps.

I hope anyone has a solution to this, or at least an idea of why this is happening.

If you have 256 individual Texture2Ds being used within the same SpriteBatch Begin/End call, it's not surprising that performance isn't optimal unless you are ordering the sprites by texture, which you likely are not for a platformer. All that texture switching within a single batch will drag the frame rate down; the background image is likely just the straw that breaks the camel's back for your particular game setup.
Have you tried combining those 256 separate images into a smaller number of Texture2Ds (i.e. using spritesheets or a texture atlas)? Here is an older link about how proper sprite sorting can affect performance.
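As a minimal sketch of both ideas in XNA 4.0 (the atlas texture and source rectangles here are hypothetical stand-ins, not names from your project):

// Option 1: let SpriteBatch sort the draw calls by texture,
// which minimizes texture switches within one batch.
spriteBatch.Begin(SpriteSortMode.Texture, BlendState.AlphaBlend);
foreach (var sprite in sprites)
    spriteBatch.Draw(sprite.Texture, sprite.Position, Color.White);
spriteBatch.End();

// Option 2: pack many images into one atlas Texture2D and pick out
// regions with source rectangles, so every draw shares the same texture.
spriteBatch.Begin();
spriteBatch.Draw(atlas, playerPosition, playerSourceRect, Color.White);
spriteBatch.Draw(atlas, enemyPosition, enemySourceRect, Color.White);
spriteBatch.End();

Note that SpriteSortMode.Texture changes draw order, so it only works where sprites don't rely on painter's-order overlap; the atlas approach has no such restriction.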

Related

UNITY3D How to do multiple animations on the same object at the same time?

Unity version: 5.6.2f1
[Image: tech ring]
I have a tech ring separated into multiple parts (see the image above). I've animated the ring parts to rotate around the center at different rates. This is my idle state on the first layer.
On the second layer I've animated the ring parts to elevate from their locations. To test this, I've temporarily chosen it as the first state after entry.
Only the first layer worked. I tried additive settings and synchronizing layers, but none of those worked. The sync worked on the state, but it didn't actually play the animation clip, which is really weird.
The elevation animation should activate on a trigger later, and I don't want the idle rotation to stop or change.
How can I pull this off?
Why not scripting?
Doing animations from script is slower, especially if I want to rotate and scale 8 different objects with different values. Also, the animation curves (which make the animation smooth) would take quite a lot of time to implement.
SOLUTION
So, I figured out the problem: I hadn't set weights for the animation layers.
Setting the weight of the second layer to 1 and making it additive solved the problem.
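For anyone who needs to drive this from code instead of the inspector, the layer weight can also be set at runtime. A minimal sketch, assuming the Animator lives on the same GameObject and the elevation layer is at index 1 (the additive blending mode itself is still configured on the layer in the Animator window):

using UnityEngine;

public class RingLayerSetup : MonoBehaviour
{
    void Start()
    {
        Animator animator = GetComponent<Animator>();
        // Layer 0 (idle rotation) plays as normal; give the second
        // layer full weight so its clips are actually evaluated.
        animator.SetLayerWeight(1, 1f);
    }
}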

How much improvement can I expect with SharpDX over heavily optimized GDI code in C#/WinForms?

I've been working on a C#/GDI graphical app for a couple of years, and I've spent a lot of time optimizing the drawing code. I draw to the screen by invalidating a PictureBox control about 10 times a second and handling the subsequent OnPaint event that Windows raises. The OnPaint event gives me access to the Graphics object via the PaintEventArgs parameter.
Per frame: I draw hundreds of lines, hundreds of rectangles, and I call the Graphics.DrawString() method hundreds of times as well.
I started putting together a SharpDX project in hopes that I could draw more 2D elements, and draw them faster. I set up two test projects that draw the same 2D elements on the screen, one using GDI and one using SharpDX. I used a C# Stopwatch to measure how long it took to draw all the 2D elements. So far, I have not noticed any speed improvement when drawing with SharpDX: both GDI and SharpDX average about 20 ms per draw pass.
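A timing harness along these lines matches the setup described above (the counts and shapes are illustrative placeholders, not the asker's actual scene):

// Paint handler of the PictureBox; times one full GDI draw pass.
void pictureBox_Paint(object sender, PaintEventArgs e)
{
    var sw = System.Diagnostics.Stopwatch.StartNew();
    Graphics g = e.Graphics;
    using (var pen = new Pen(Color.LimeGreen))
    using (var font = new Font("Segoe UI", 8f))
    {
        for (int i = 0; i < 300; i++)
        {
            g.DrawLine(pen, 0, i, 400, i + 50);                // hundreds of lines
            g.DrawRectangle(pen, i, 10, 20, 20);               // hundreds of rectangles
            g.DrawString("label", font, Brushes.White, i, 40); // hundreds of strings
        }
    }
    sw.Stop();
    System.Diagnostics.Debug.WriteLine("Draw pass: " + sw.ElapsedMilliseconds + " ms");
}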
How much of a speed improvement should I expect by using SharpDX? And which portion of the rasterization is supposed to make it faster than GDI?
I worked on a Windows Forms app where the "rendering system" was pluggable, and I initially wrote two rendering systems: one in GDI+, and one in Managed DirectX (a .NET wrapper around DirectX, similar to SharpDX). Because I was doing a lot of drawing of images at arbitrary scales, Managed DirectX blew GDI+ out of the water for our use case. Drawing code that used 1-pixel-wide lines was also very fast in Managed DirectX. Thick lines were much slower than single-pixel lines, because they were actually rendered as triangle strips (which can be drawn quickly by the GPU) whose coordinates had to be calculated by the CPU, and the geometry was complicated by segment joints (which were rounded, if I remember right). Fortunately, in our application we didn't need to draw curves, but those would have to be approximated by small line segments (for single-pixel widths) and triangles (for anything thicker).
It's things like CPU-based approximation and triangulation that slow a Direct3D app down. 3D games use pre-calculated meshes and make use of vertex buffers on the GPU to avoid moving data back and forth from the CPU to GPU. I don't have any data comparing speeds between GDI+ and DirectX, but these are some things to consider.
Direct2D takes a bit of getting used to, but once you get it up and running properly I can promise that you will never look back. I used it to migrate a very large project which was based on DirectDraw with GDI+ Interop and saw a huge performance increase as well as better stability and a more satisfying development experience.
It received a lot of negative press about performance, particularly when it was first introduced, but if you hook it up to a DXGI swap chain (which is very easy to do) and keep your code sensible, the benefits become clear.
SharpDX is the right choice and it will only get faster in the near future, with SSE/SIMD driven primitives just around the corner.
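For the swap-chain hookup mentioned above, the SharpDX samples follow roughly this pattern. Treat it as a sketch: exact type and method names shift a little between SharpDX releases, and form is assumed to be whatever window you render into:

using SharpDX.Direct2D1;
using SharpDX.Direct3D;
using SharpDX.Direct3D11;
using SharpDX.DXGI;

// Create a D3D11 device plus a swap chain bound to the window.
var desc = new SwapChainDescription
{
    BufferCount = 1,
    ModeDescription = new ModeDescription(form.ClientSize.Width, form.ClientSize.Height,
                                          new Rational(60, 1), Format.B8G8R8A8_UNorm),
    IsWindowed = true,
    OutputHandle = form.Handle,   // 'form' is your render window (assumption)
    SampleDescription = new SampleDescription(1, 0),
    SwapEffect = SwapEffect.Discard,
    Usage = Usage.RenderTargetOutput
};
SharpDX.Direct3D11.Device device;
SwapChain swapChain;
SharpDX.Direct3D11.Device.CreateWithSwapChain(DriverType.Hardware,
    DeviceCreationFlags.BgraSupport, desc, out device, out swapChain);

// Wrap the swap chain's back buffer in a Direct2D render target.
var d2dFactory = new SharpDX.Direct2D1.Factory();
Surface backBuffer = Surface.FromSwapChain(swapChain, 0);
var d2dTarget = new RenderTarget(d2dFactory, backBuffer,
    new RenderTargetProperties(new PixelFormat(Format.Unknown,
        SharpDX.Direct2D1.AlphaMode.Premultiplied)));

// Per frame: draw with Direct2D, then present in step with vsync.
d2dTarget.BeginDraw();
d2dTarget.Clear(SharpDX.Color.Black);
// ... DrawLine / DrawRectangle / DrawText calls go here ...
d2dTarget.EndDraw();
swapChain.Present(1, PresentFlags.None);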

XNA RenderTarget2D limitations of size

I have been using XNA to make a PC game, and I am doing lighting with RenderTarget2D by drawing the world into one and the light mask into another. I wasn't running into any problems at first on my 1920x1080 monitor, but I just upgraded to a 27" 2560x1440 S-IPS monitor, and now I'm getting an error because my resolution is higher than 2048, which is apparently the largest size a texture can be.
It's trying to draw a texture of size 2560 by 1440, so I need to find a way around that limitation. I suppose for large screens I could somehow split it into multiple RenderTarget2Ds. My issue with that is how I pass in the light mask:
// Render the world and the light mask each into their own RenderTarget2D.
drawMain();
drawLightMask();

// Draw the world texture through the lighting shader, which samples the light mask.
ScreenManager.SpriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend);
lightingEffect.Parameters["lightMask"].SetValue(lightMask);
lightingEffect.CurrentTechnique.Passes[0].Apply();
ScreenManager.SpriteBatch.Draw(mainScene, new Vector2(0, 0), Color.White);
ScreenManager.SpriteBatch.End();
I'm sure others have had to deal with this issue, so maybe someone has ideas on how I could approach this 2048 limitation?
Edit: Another thought: perhaps when I detect a resolution larger than 2048 I could just set a lower one instead of defaulting to the current resolution. That way I wouldn't really have to worry about it. The problem with this is that in fullscreen mode it changes the screen's resolution, so when the player exits the game it has to switch resolutions again, which is kind of messy. In borderless windowed mode the window is just smaller, and it doesn't seem to change the resolution.
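One avenue worth checking before splitting render targets: in XNA 4.0 the 2048 cap belongs to the Reach graphics profile, while the HiDef profile raises the maximum texture and render-target size to 4096, which covers 2560x1440. A sketch, assuming a standard Game subclass with a GraphicsDeviceManager field named graphics:

// In the Game constructor: opt into HiDef when the hardware supports it.
// Reach caps textures/render targets at 2048x2048; HiDef allows 4096x4096.
if (GraphicsAdapter.DefaultAdapter.IsProfileSupported(GraphicsProfile.HiDef))
    graphics.GraphicsProfile = GraphicsProfile.HiDef;
else
    graphics.GraphicsProfile = GraphicsProfile.Reach; // fallback: keep targets <= 2048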

XNA Window Scaling Performance

In my XNA game, I program and design the entire thing for a 1920x1080 resolution and then scale and letterbox to fit the running system (Xbox or PC).
This has been a great solution as it allows me to only ever worry about one resolution.
However, I'm now wondering if this will come back to bite me in the future as the game becomes more complex.
Since I'm having to scale everything with every draw (I pass the scale factor to SpriteBatch.Begin() only once, do all the drawing, then call End()), is this going to have any detrimental effect on performance? I know the Xbox already does this for XNA games when set to 720p natively (which I actually am using when running on Xbox; it just gets the appropriate scaling factor), so I can't imagine it's too bad, even on PC.
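For concreteness, the matrix form of that one-time scale looks like this in XNA 4.0; computing the factor from the 1920x1080 design size is an assumption about the setup:

// Scale from the 1920x1080 design resolution to the actual backbuffer width.
float scale = GraphicsDevice.Viewport.Width / 1920f;
Matrix transform = Matrix.CreateScale(scale, scale, 1f);

spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.AlphaBlend,
                  null, null, null, null, transform);
// ... all draw calls stay in 1920x1080 coordinates ...
spriteBatch.End();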
The Xbox does its scaling in a specialized scaler in the video output hardware, so there is zero performance hit to your app.
The scale factor you pass in to SpriteBatch just transforms the vertices of your sprites. It's done with a factor of 1 even if you don't pass in a scale, so there is no extra load there.
With different screen sizes there will be different fill rates (more or fewer pixels), and texture lookups will differ too, so that will show some variance.
Just using a 1280x720 backbuffer is the safest thing to do. The Xbox can hardware-scale that into ANY resolution or aspect ratio without a single line of code or any perf issues. It will letterbox properly, and the hardware scaler is very good quality. Not being able to claim 1080p support is really the only downside.
If you choose 1920x1080 as your backbuffer, note that the hardware scaler cannot scale that down to 480i, and it WILL fail in peer review (well, it should, if anyone notices).
On Windows the easiest thing is probably to draw everything to a RenderTarget and then scale the whole thing in one go at the end. See the SpaceWar starter kit for the few lines of code it takes to do that. (All inside #if !XBOX so you don't waste cycles doing it on the 360.)
This shouldn't cost you anything; the scaling is done by the video card.
However, if this does worry you, another option for handling different resolutions is to change the render target to an off-screen buffer, render to that, and then draw a quad on the screen with the buffer as the texture. More pixels are rendered, but you don't need to do a transform on each vertex.
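A sketch of that off-screen approach in XNA 4.0 (virtualTarget is a placeholder name; letterbox math is omitted, this just stretches to the backbuffer):

// Created once: an off-screen target at the fixed design resolution.
RenderTarget2D virtualTarget = new RenderTarget2D(GraphicsDevice, 1920, 1080);

// Per frame: render the scene into the target...
GraphicsDevice.SetRenderTarget(virtualTarget);
GraphicsDevice.Clear(Color.Black);
// ... draw the whole scene in 1920x1080 coordinates ...
GraphicsDevice.SetRenderTarget(null);

// ...then draw the target as one textured quad over the backbuffer.
spriteBatch.Begin();
spriteBatch.Draw(virtualTarget,
                 new Rectangle(0, 0, GraphicsDevice.Viewport.Width,
                               GraphicsDevice.Viewport.Height),
                 Color.White);
spriteBatch.End();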

How to eliminate tearing from animation?

I'm running an animation in a WinForms app at 18.66666... frames per second (it's synced with music at 140 BPM, which is why the frame rate is weird). Each cel of the animation is pre-calculated, and the animation is driven by a high-resolution multimedia timer. The animation itself is smooth, but I am seeing a significant amount of "tearing", or artifacts that result from cels being caught partway through a screen refresh.
When I take the set of cels rendered by my program and write them out to an AVI file, and then play the AVI file in Windows Media Player, I do not see any tearing at all. I assume that WMP plays the file smoothly because it uses DirectX (or something else) and is able to synchronize the rendering with the screen's refresh activity. It's not changing the frame rate, as the animation stays in sync with the audio.
Is this why WMP is able to render the animation without tearing, or am I missing something? Is there any way I can use DirectX (or something else) in order to enable my program to be aware of where the current scan line is, and if so, is there any way I can use that information to eliminate tearing without actually using DirectX for displaying the cels? Or do I have to fully use DirectX for rendering in order to deal with this problem?
Update: I forgot a detail. My app renders each cel onto a PictureBox using Graphics.DrawImage. Is this significantly slower than using BitBlt, such that I might eliminate at least some of the tearing by switching to BitBlt?
Update 2: the effect I'm seeing is definitely not flicker (which is different from tearing). My panel is double-buffered, sets the control styles AllPaintingInWmPaint, UserPaint, OptimizedDoubleBuffer etc., overrides OnPaintBackground, and so on. All of these were necessary to eliminate flicker, but the tearing problem remains. It is especially pronounced when the animation has very fast-moving objects or objects that change from light to dark very quickly. When objects are slow-moving and don't change color rapidly, the tearing effect is much less noticeable (because consecutive cels are very similar to each other).
Tearing occurs when your image update is not in sync with the refresh rate of the monitor. The monitor shows part of the previous image, part of the new image. When the objects in the image move fast, the effect is quite noticeable.
It isn't fixable in Windows Forms; you can't get at the video adapter's v-sync signal. You can in a DirectX app.
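As a point of comparison, in an XNA (DirectX-based) app the v-sync wait is a one-liner; a sketch of the relevant settings in a Game subclass constructor:

// Present frames in step with the monitor's vertical retrace,
// so a buffer is never swapped partway through a refresh.
graphics = new GraphicsDeviceManager(this);
graphics.SynchronizeWithVerticalRetrace = true;

// Let Update/Draw run once per presented frame instead of on a fixed clock.
IsFixedTimeStep = false;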
I tried the double buffering idea on the project I'm working on at the moment, but I didn't get very good results with it. In the end, I created the following:
1. A System.Drawing.Bitmap for my offscreen buffer; the animation is decoded into this bitmap.
2. A UserControl the same size as the image in (1), where the OnPaintBackground method is empty (no drawing, no call to the base class) and OnPaint does a Graphics.DrawImage to copy the offscreen image to the screen (sketched below).
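A minimal version of that control might look like this (the class and field names are illustrative):

using System.Drawing;
using System.Windows.Forms;

class AnimationSurface : UserControl
{
    public Bitmap Backbuffer;   // each cel is decoded into this offscreen bitmap

    protected override void OnPaintBackground(PaintEventArgs e)
    {
        // Intentionally empty: no background erase, no base call.
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        if (Backbuffer != null)
            e.Graphics.DrawImage(Backbuffer, Point.Empty);
    }
}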
Now, you've got a weird animation rate, so the tearing is almost certainly down to a mismatch between your update rate and the screen's refresh rate. You are updating the screen midway through its refresh, so the screen draws the old frame at the top and the new frame at the bottom. If you can synchronise your frame rate with the display refresh rate, the tearing should disappear.
You would be better off using DirectX (or OpenGL) for such tasks. But if you want to stick with WinForms, use the DoubleBuffered property.
Double buffer it.
You can enable double buffering using window styles or, what's probably easier, draw to an offscreen image and then swap the images.
If this doesn't work, then the best thing to do is BitBlt with a double buffer. Essentially it's the same idea:
Keep references to two bitmaps: one is the screen, the other is the buffer.
You draw to the buffer first, then blit the entire thing to the screen.
This way you only ever write live data to the buffer; the screen simply shows something you made earlier (Blue Peter style).
Tearing is an artifact of one frame being drawn on top of another. The only safe ways of avoiding it are to a) wait for vsync or b) draw behind the beam (which is rather tricky). Double buffering alone doesn't guarantee freedom from tearing, since you can double buffer and still swap with vsync off. Some cards might also have a "force vsync off" option in the driver. You need to check the documentation regarding vsync and how to check where the beam is. This is the only safe way to do it. Also, keep in mind that waiting for vsync will lock your framerate.
