Texture-related memory leak in XNA 4.0 / C#

I found a memory leak in my XNA 4.0 application written in C#. The program needs to run for a long time (days) but it runs out of memory over the course of several hours and crashes. Opening Task Manager and watching the memory footprint, every second another 20-30 KB of memory is allocated to my program until it runs out. I believe the memory leak occurs when I set the BasicEffect.Texture property because that is the statement that finally throws the OutOfMemory exception.
The program has around 300 large (512px) textures stored in memory as Texture2D objects. The textures are not square or even powers of 2 - e.g. can be 512x431 - one side is always 512px. These objects are created only at initialization, so I am fairly confident it is not caused by creating/destroying Texture2D objects dynamically. Some interface elements create their own textures, but only ever in a constructor, and these interface elements are never removed from the program.
I am rendering texture mapped triangles. Before each object is rendered with triangles, I set the BasicEffect.Texture property to the already created Texture2D object and the BasicEffect.TextureEnabled property to true. I apply the BasicEffect in between each of these calls with BasicEffect.CurrentTechnique.Passes[0].Apply() - I'm aware that I'm calling Apply() twice as much as I should, but the code is wrapped inside of a helper class that calls Apply() whenever any property of BasicEffect changes.
I am using a single BasicEffect class for the entire application and I change its properties and call Apply() any time I render an object.
First, could it be that changing the BasicEffect.Texture property and calling Apply() so many times is leaking memory?
Second, is this the proper way to render triangles with different textures? E.g. using a single BasicEffect and updating its properties?
This code is taken from a helper class so I've removed all the fluff and only included the pertinent XNA calls:
// single BasicEffect object for the entire application
BasicEffect effect = new BasicEffect(graphicsDevice);

// loaded from file at initialization (before any Draw() is called);
// simplified -- Texture2D has no file-name constructor, the real code loads through a stream
Texture2D texture1 = Texture2D.FromStream(graphicsDevice, TitleContainer.OpenStream("image1.jpg"));
Texture2D texture2 = Texture2D.FromStream(graphicsDevice, TitleContainer.OpenStream("image2.jpg"));
// render object 1
if (effect.Texture != texture1)   // effect.Texture eventually throws OutOfMemory exception
    effect.Texture = texture1;
effect.CurrentTechnique.Passes[0].Apply();
effect.TextureEnabled = true;
effect.CurrentTechnique.Passes[0].Apply();
graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertices1, 0, numVertices1, indices1, 0, numTriangles1);

// render object 2
if (effect.Texture != texture2)
    effect.Texture = texture2;
effect.CurrentTechnique.Passes[0].Apply();
effect.TextureEnabled = true;
effect.CurrentTechnique.Passes[0].Apply();
graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertices2, 0, numVertices2, indices2, 0, numTriangles2);
It's an XNA application, so 60 times per second I call my Draw method, which renders all my various interface elements. This means that I could be drawing between 100-200 textures per frame, and the more textures I draw, the faster I run out of memory, even though I am not calling new anywhere in the update/draw loops. I am more experienced with OpenGL than DirectX, so clearly something is happening behind the scenes that is creating unmanaged memory I'm not aware of.

The main suggestion I can come up with is to group your textures into atlases rather than using them one by one. It will speed up your rendering and reduce the load on the GPU, provided that you are rendering a large number of textures per frame. The reason is that the GPU will not have to swap textures so often, which is an expensive operation. I am drawing on my knowledge of OpenGL here, but my guess is that XNA, being based on DirectX, loads textures in a similar manner (unless you are using MonoGame, which lets you use OpenGL).
That said, you did not provide very much information. The memory leak could be coming from the texture switching, but it could also be coming from somewhere else. Most of your memory is occupied by textures, which is probably why you are getting the crash there rather than somewhere else. I am guessing one of a few things is happening here:
The garbage collector is not working fast enough to pick up all the RAM being allocated inside the rendering functions
You have a memory leak somewhere else in your code, and it is showing up here instead
Once again, it is difficult to figure out what is going on here given how little I know about your code. But to the best of my ability, I have a few suggestions for you:
Run through your code and look at how you are referencing things. Make sure that you don't have any stale references held inside classes and structs. If you use something, pass it around to different classes and later consider it "discarded", there is a good chance that something is still holding onto that object, preventing it from being collected.
Do a search for all the "new" keywords you have in the solution. If something is constantly being created with the "new" keyword, this can be a huge memory leak, as it creates a massive amount of objects on the heap. The garbage collector SHOULD be picking them up, but I wouldn't say I trust the garbage collector very much. Worst case, the garbage collector is not coming around often enough to deal with this memory pressure.
Find ways to reduce the sizes of your textures. Atlasing is a solution that reduces the overhead of packaging each texture in its own Texture2D. It can be a bit more work, because you will have to address sub-rectangles of a shared texture instead of whole files, but in your case it may very well be worth it.
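For illustration only, here is a rough sketch of what addressing a sub-rectangle of an atlas looks like with a BasicEffect; the atlas asset name, sheet size and placement below are made-up numbers, not anything from your project:
// One shared atlas replaces many individual Texture2D objects.
Texture2D atlas = Content.Load<Texture2D>("Textures/atlas0");   // hypothetical asset name

effect.Texture = atlas;          // set once instead of once per object
effect.TextureEnabled = true;

// A 512x431 image stored at (0, 512) inside a 2048x2048 sheet maps to these UVs.
Vector2 uvMin = new Vector2(0f / 2048f, 512f / 2048f);
Vector2 uvMax = new Vector2(512f / 2048f, (512f + 431f) / 2048f);

// Build vertices whose texture coordinates address only that sub-rectangle.
VertexPositionTexture[] quad = new VertexPositionTexture[]
{
    new VertexPositionTexture(new Vector3(0, 0, 0), uvMin),
    new VertexPositionTexture(new Vector3(1, 0, 0), new Vector2(uvMax.X, uvMin.Y)),
    new VertexPositionTexture(new Vector3(0, 1, 0), new Vector2(uvMin.X, uvMax.Y)),
    new VertexPositionTexture(new Vector3(1, 1, 0), uvMax),
};
short[] indices = { 0, 1, 2, 2, 1, 3 };

effect.CurrentTechnique.Passes[0].Apply();
graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, quad, 0, 4, indices, 0, 2);
Objects that live on the same sheet can then be drawn back to back without touching effect.Texture at all.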
If you are convinced that there is an issue within XNA, try a different implementation called MonoGame. It follows the exact same structure as XNA but is maintained by a community. As a result, the guts of the libraries you are using have been rewritten, and there is a good chance that whatever is destroying your heap has been fixed.
My advice to you? If you are really familiar with OpenGL and what you are doing is fairly simple, then I would check out OpenTK. It is a thin binding layer that "ports" OpenGL into C#. All the commands are exactly the same, and you have the flexibility of using the entire .NET library for all the extra fluff.
I hope this helps!

Related

Do I have to keep creating a Graphics object

I am an old Delphi programmer; I am used to creating objects once and using them the entire time for efficient memory usage. But in C# (or at least in all the tutorials I've ever seen), you create things with new every time (thanks to the garbage collector!! let me do the coding)..
Anyway, I am trying to create design software that involves lots of drawing.
My question is: do I have to create a Graphics object, or should I use e.Graphics from protected override void OnPaint(PaintEventArgs e) on every paint event? Because when I create a Graphics object and then resize the control that I draw on, the Graphics object that I created has a clipping problem and only draws the old rectangle region..
thanks
Caching objects makes sense when the object is expensive to create, cheap to store and relatively simple to keep updated. A Graphics object is unique in that none of these conditions are true:
It is very cheap to create, takes well less than a microsecond.
It is very expensive to store, the underlying device context is stored in the desktop heap of a session. The number of objects that can be stored is small, no more than 65535. All programs that run in the session share that heap.
It is very hard to keep updated, things happen behind your back that invalidates the device context. Like the user or your program changing the window size, invalidating the Graphics.ClipBounds property. You are wasting the opportunity to use the correct Graphics object, the one passed to you in a Paint event handler. Particularly a bug factory when you use double-buffering.
Caching a Graphics object is a bug.
If you want to draw on the surface always use the Graphics object from the Paint event!
If you want to draw into a Bitmap you create a Graphics object and use it as long as you want.
For the Paint event to work you need to collect all drawing in a List of graphic actions; so you will want to make a nice class to store all parameters needed.
In your case you may want to consider a mixed approach: old graphic actions draw into a bitmap, which is e.g. the BackgroundImage or Image of your control.
Current/ongoing drawing is done on the surface. This amounts to using the bitmap as a cache, so you don't have to redraw lots of actions on every little change etc.
This is closely related to your undo/redo implementation. You could set a limit and draw the actions before it into a Bitmap and those after it onto the surface..
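To make the "list of graphic actions" idea concrete, here is a minimal sketch; the DrawAction class and its members are my own invention, not a standard type:
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;

// A minimal "graphic action" record: what to draw and where.
class DrawAction
{
    public Rectangle Bounds;
    public Color Color;
}

class DrawingPanel : Panel
{
    private readonly List<DrawAction> actions = new List<DrawAction>();

    public DrawingPanel()
    {
        // Reduce flicker while the whole surface is repainted each time.
        this.DoubleBuffered = true;
        this.ResizeRedraw = true;
    }

    public void AddRectangle(Rectangle bounds, Color color)
    {
        actions.Add(new DrawAction { Bounds = bounds, Color = color });
        this.Invalidate();   // request a repaint; never cache a Graphics object
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        // Replay the stored actions with the Graphics object WinForms hands us.
        foreach (DrawAction action in actions)
            using (Pen pen = new Pen(action.Color))
                e.Graphics.DrawRectangle(pen, action.Bounds);
    }
}
The mixed approach described above would simply replay the "old" part of this list into a Bitmap once, instead of on every paint.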
PS: You should also rethink your GC attitude. It is simple, efficient and a blessing to have around. (And, yes, I have done my share of TP & Delphi, way back when they were affordable..) - Yes, we do the coding, but GC is not about coding, it is about housekeeping. Boring at best.. (And you can always design to avoid it, but not with a Graphics object in a Windows system.)
A general rule for every class that implements IDisposable is to Dispose() it, as soon as possible. Make sure you know about the using(...){} statement.
For drawing in WinForms (GDI+) the best practice is indeed to use the Graphics object from PaintEventArgs. And because you didn't create that one, do not Dispose() it. Don't stash it either.
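A short sketch of both cases, assuming a typical WinForms control and a Bitmap cache field of your own:
// Case 1: drawing on the control surface -- use the Graphics WinForms hands you,
// and do not Dispose() it (you did not create it).
protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e);
    e.Graphics.DrawEllipse(Pens.Red, 10, 10, 80, 40);
}

// Case 2: drawing into a Bitmap you own -- create the Graphics yourself
// and let using(...) dispose it as soon as you are done.
private void RenderToCache(Bitmap cache)
{
    using (Graphics g = Graphics.FromImage(cache))
    {
        g.Clear(Color.White);
        g.FillRectangle(Brushes.SteelBlue, 10, 10, 80, 40);
    } // the device context is released here, not whenever the GC gets around to it
}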
I have to completely disagree with other more experienced members here who say it's no big deal or in fact better to recreate the Graphics object over and over.
The HDC is a pointer to an HDC__ struct, which is a struct with one member, "int unused". It's an absolute waste, frankly, to create another instance/object every time drawing needs to be done. The HDC is NOT large; it's either 4 or 8 bytes, and the struct it points to is in nearly all cases 4 bytes. Furthermore, on the point that one person made, it doesn't help to declare the Graphics object with the "static" keyword at the beginning of WndProc(), before the switch, because the only way to give the Graphics object a device context or handle to paint on is by calling its constructor, so "static" does nothing to save you from creating it over and over again.
On top of that, Microsoft recommends that you create an HDC pointer and assign it the same value the PAINTSTRUCT already holds, on every single WM_PAINT message it sends.
I'm sorry but the WinAPI in my opinion is very bad. Just as an example, I spent all day researching how to make a child WS_EX_LAYERED window, to find out that in order to enable Win 8 features one has to add code in XML with the OS's ID number to the manifest. Just ridiculous.

How to prevent duplicate copies of a loaded texture in XNA Game Studios (Design)

I have zones in my game world that are used to partition renderable areas based on their visibility and proximity to the user.
For simplicity, something like this:
class Zone
{
    Texture2D _texture;

    public Zone(...)
    {
        _texture = Content.Load<Texture2D>("Textures\\a");
    }
}
Currently, this structure has _texture as a member of the Zone class. Now, if I need to render, say, 20 zones, I will have to load 20 copies of what could possibly be the same texture.
From my point of view, this seems very inefficient memory wise.
I looked into the ContentManager class (called Content above) on MSDN and around the net, and it looks like it has an Unload() feature, leading me to believe that it is a centralized source for the texture data - meaning that all of the loaded textures in my hypothetical 20 zones point to the same reference texture and will all be disposed when I call Unload().
Is this the case? Or should I implement a ContentCache to load all of these textures and have my zones point to that centrally loaded texture that they all share?
Yes, ContentManager handles caching of textures and prevents needless reloading. You can find more information here: https://stackoverflow.com/a/9870420/1141432
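If you want to convince yourself, a quick check (a hypothetical snippet, reusing your existing Content instance) is to load the same asset twice and compare the references:
// Two loads of the same asset name through the same ContentManager...
Texture2D first  = Content.Load<Texture2D>("Textures\\a");
Texture2D second = Content.Load<Texture2D>("Textures\\a");

// ...return the same cached instance, so 20 zones loading "Textures\\a"
// all end up sharing one Texture2D. Expected output: True.
Console.WriteLine(object.ReferenceEquals(first, second));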

D3DImage OutOfMemoryException when calling SetBackBuffer

I'm using D3DImage to show images rendered using Direct3D. The Direct3D rendering needs to be happening in its own thread, while the GUI thread takes a surface when it wants and puts it on the screen using D3DImage.
At first I tried to do this using a single D3D render target, however even with locks in place, I had serious tearing, i.e. the rendering thread was overwriting the surface as WPF was copying it on its frontbuffer. It seems like WPF is very unpredictable as to when it copies the data (i.e. it's not on D3DImage.Unlock() or even on the next D3DImage.Lock(), as the documentation suggests).
So now what I'm doing is I have two render targets, and every time WPF displays a frame, it asks the rendering thread to swap its targets. So I'm always rendering into the target WPF isn't using.
This means that on each graphical update of the window, I do something like
m_d3dImage.Lock();
m_d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, m_d3dRenderer.OutputSurface);
m_d3dImage.Unlock();
m_d3dRenderer.SwapSurfaces();
where OutputSurface is an IntPtr that points to the D3D render target we're not currently rendering to, and SwapSurfaces just swaps the two surface pointers and calls IDirect3DDevice9::SetRenderTarget with the one we'll use to render next.
EDIT: as requested, here is the code of SwapSurfaces():
var temp = m_renderingSurface;
m_renderingSurface = m_outputSurface;
m_outputSurface = temp;
m_d3dDevice.SetRenderTarget(0, m_renderingSurface);
Where m_renderingSurface and m_outputSurface are the two render targets (SharpDX.Direct3D9.Surface), and m_d3dDevice is the DeviceEx object.
This works beautifully, i.e. no tearing, however after a few seconds I get an OutOfMemoryException, and Direct3D has the following debug output:
Direct3D9: (ERROR) :Invalid iBackBuffer parameter passed to GetBackBuffer
Direct3D9: (ERROR) :Error during initialization of texture. CreateTexture failed.
Direct3D9: (ERROR) :Failure trying to create a texture
Direct3D9: (ERROR) :Error during initialization of texture. CreateTexture failed.
Direct3D9: (ERROR) :Failure trying to create a texture
MIL FAILURE: Unexpected HRESULT 0x8876017c in caller: CInteropDeviceBitmap::Present D3D failure
Direct3D9: (WARN) :Alloc of size 1577660 FAILED!
Direct3D9: (ERROR) :Out of memory allocating memory for surfaces.
Direct3D9: (ERROR) :Failure trying to create offscreen plain surface
I've found a related topic here where the proposed solution was to call D3DImage.SetBackBuffer() and pass IntPtr.Zero, however I've added this just before the existing call and it didn't solve the issue. I also tried calling Lock() and Unlock() around the SetBackBuffer(... IntPtr.Zero), and that didn't solve the issue either.
At this point I'm wondering if there's a bug in D3DImage or if I should use a different approach altogether. Could I replace my 2 render targets with a D3D swap chain instead, would that allow me to stop having to call SetBackBuffer with a different pointer all the time? I'm a newbie with Direct3D.
EDIT: I looked in the code of D3DImage.SetBackBuffer() using .NET Reflector, and it's creating an InteropBitmap every time. It doesn't do anything in particular for IntPtr.Zero. Since I'm calling this many times per second, perhaps the resources don't have the time to be freed. At this point I'm thinking of using 2 different D3DImages and alternating their visibility to avoid having to call their SetBackBuffer() all the time.
Thanks.
It appears that D3DImage creates a new Direct3D texture every time you set its back buffer pointer to something different. This is eventually cleaned up, but setting it 30 times per second like I was doing doesn't leave it enough time, and is in any case a big performance killer. The approach I went for was to create several D3DImages, each with its own surface, put them all on top of each other, and toggle their Visibility property so only one shows at a time. This seems to work very well and doesn't leak any memory.
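For reference, a rough sketch of that approach is below; the field names (m_d3dImageA, m_imageControlA, etc.) are illustrative, not my actual code, and error handling is omitted:
// Assumes two Image controls (m_imageControlA / m_imageControlB) declared in XAML,
// each with its own D3DImage (m_d3dImageA / m_d3dImageB) as Source.
// Namespaces: System.Windows, System.Windows.Interop.

// The back buffer of each D3DImage is set exactly once, at startup, so WPF
// never has to create a new interop texture afterwards.
void InitImages(IntPtr surfaceA, IntPtr surfaceB)
{
    m_d3dImageA.Lock();
    m_d3dImageA.SetBackBuffer(D3DResourceType.IDirect3DSurface9, surfaceA);
    m_d3dImageA.Unlock();

    m_d3dImageB.Lock();
    m_d3dImageB.SetBackBuffer(D3DResourceType.IDirect3DSurface9, surfaceB);
    m_d3dImageB.Unlock();
}

// Per frame: mark the freshly rendered image dirty and flip which control is visible.
void PresentFrame(bool surfaceAIsReady)
{
    D3DImage ready = surfaceAIsReady ? m_d3dImageA : m_d3dImageB;

    ready.Lock();
    ready.AddDirtyRect(new Int32Rect(0, 0, ready.PixelWidth, ready.PixelHeight));
    ready.Unlock();

    m_imageControlA.Visibility = surfaceAIsReady ? Visibility.Visible : Visibility.Hidden;
    m_imageControlB.Visibility = surfaceAIsReady ? Visibility.Hidden : Visibility.Visible;
}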
I use a D3DImage to display images at a rapid framerate and ran into this same problem. I found out that D3DImage does create a new texture if you set the backbuffer pointer to a different pointer. But if you only change the backbuffer pointer's content (and not the pointer's address) it will not create a new texture.

List<T>.AddRange is causing a brief delay

I have a list of entities which implement an ICollidable interface. This interface is used to resolve collisions between entities. My entities are thus:
Players
Enemies
Projectiles
Items
Tiles
On each game update (about 60 t/s), I am clearing the list and adding the current entities based on the game state. I am accomplishing this via:
collidableEntities.Clear();
collidableEntities.AddRange(players);
collidableEntities.AddRange(enemies);
collidableEntities.AddRange(projectiles);
collidableEntities.AddRange(items);
collidableEntities.AddRange(camera.VisibleTiles);
Everything works fine until I add the visible tiles to the list. The first ~1-2 seconds of running the game loop causes a visible hiccup that delays drawing (so I can see a jitter in the rendering). I can literally remove/add the line that adds the tiles and see the jitter occur and not occur, so I have narrowed it down to that line.
My question is, why? The list of VisibleTiles is about 450-500 tiles, so it's really not that much data. Each tile contains a Texture2D (image) and a Vector2 (position) to determine what is rendered and where. I'm going to keep looking, but off the top of my head I can't understand why only the first 1-2 seconds hiccup and it is then smooth from there on out.
Any advice is appreciated.
Update
I have tried increasing the initial capacity to an approximate amount of elements, but no difference was observed.
Update
As requested, here is the code for camera.VisibleTiles
public List<Tile> VisibleTiles
{
    get { return this.visibleTiles; }
}
Whenever you have a performance issue like this, the correct response is not to guess at what possible causes might be; the correct response is to profile the application. There are a number of options available; I'm sure you can find suggestions here on StackOverflow.
However, I'm feeling lucky today, so I'm going to ignore the advice I just gave you and take a wild guess. I will explain my reasoning, which I hope will be helpful in diagnosing such problems in the future.
The fact that the problem you describe only happens once and only at the beginning of gameplay leads me to believe that it's not anything inherent to the functionality of the list itself, or of the collision logic. This is assuming, of course, that the number of objects in the list is basically constant across this period of time. If it was caused by either of these, I would expect it to happen every frame.
Therefore I suspect the garbage collector. You're likely to see GC happening right at the beginning of the game - in other words, right after you've loaded all of your massive assets into memory. You're also likely to see it happen at seemingly random points in the code, because any object you allocate can theoretically push it over the edge into a collection.
My guess is this: when you load your game, the assets you create are generating a large amount of collection pressure which is nonetheless not sufficient to trigger a collection. As you allocate objects during the course of gameplay (in this case, as a result of resizing the list), it is increasing the collection pressure to the point where the GC finally decides to activate, causing the jitter you observe. Once the GC has run, and all of the appropriate objects have been relegated to their correct generations, the jitter stops.
As I said, this is just a guess, albeit an educated one. It is, however, simple to test. Add a call to GC.Collect(2) prior to entering your render loop. If I'm right, the jitter should go away.
If you want to be more thorough than that - and I highly recommend it - Microsoft provides a tool which is useful for debugging memory issues, the CLR Profiler. This tool will show you exactly when collections are occurring and why. It's very useful for XNA development.
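If you'd rather run the quick test first, a minimal sketch of it (assuming a standard XNA Game subclass) might look like this:
protected override void LoadContent()
{
    // ... load all of the large assets here ...

    // Force a full collection once, before the render loop starts, so the
    // one-time pressure from loading is cleared up front instead of being
    // paid for during the first seconds of gameplay.
    GC.Collect(2);
    GC.WaitForPendingFinalizers();
    GC.Collect(2);
}
If the jitter disappears with this in place, the GC hypothesis is confirmed and you can then decide how to reduce the allocation pressure properly.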
The 1-2 seconds may be caused by the GC adjusting the sizes of each generation. Look at the corresponding perf counters and see if there is a large number of Gen1/2 collections in the first seconds (minimizing GC is a useful goal for games).
Additional random guess: all your objects are structs for whatever reason, and very large, so copying them takes a long time (though that would not explain things smoothing out after the first seconds).
Notes:
instead of merging the items into a single list, consider creating an iterator that avoids all the copying (see the sketch after these notes)
get a profiler
learn to look at perf counters for your process. Interesting categories are memory, GC, CPU, handles.
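Here is the kind of iterator I mean for the first note; the method name is arbitrary, and the fields are the ones from your question:
// Enumerate all collidable entities lazily instead of copying them into
// one list every frame; nothing is allocated per element, only the
// iterator object itself.
private IEnumerable<ICollidable> EnumerateCollidables()
{
    foreach (var p in players) yield return p;
    foreach (var e in enemies) yield return e;
    foreach (var pr in projectiles) yield return pr;
    foreach (var i in items) yield return i;
    foreach (var t in camera.VisibleTiles) yield return t;
}

// e.g. inside Update():
foreach (ICollidable entity in EnumerateCollidables())
{
    // resolve collisions for 'entity' ...
}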
You can use an overloaded constructor for List<T> to initialize the list with a predefined capacity:
var collidableEntities = new List<object>(500);

XNA Lags when window has focus

I'm developing with XNA Game Studio using XNA 3.1, and I've noticed a problem with some games, where they lag despite the system having plenty of resources to handle them, along with an inexplicable excess of processor usage. When the window of the game is in focus, CPU core #1 (in Task Manager) goes to 100% usage, and the game shows signs of minor lag (most noticeable when sound effects are repeated in sequence). When the game loses window focus, it continues to draw and update in real time, but the processor usage decreases and the lag disappears.
I have tested this with various games, and the results remain the same, proving that it has nothing to do with my code or code efficiency.
Is this a problem isolated to XNA 3.1, and is there a fix for it? Or do I just have to switch to 4.0 and hope my games don't use anything that isn't backwards compatible?
The problem may be caused by the garbage collector. Every time the garbage collector runs, the frame rate may drop for a second or two, though on Windows it shouldn't be a problem.
Add this line to your code and see how much heap memory is being generated. Every time the value goes down, the garbage collector has run.
SpriteBatch.DrawInt64(FONT, GC.GetTotalMemory(false) / 1000 /* in kilobytes */, new Vector2(5, 30), Color.White, 0f);
SpriteBatch.DrawInt64 is a SpriteBatch extension which doesn't generate garbage on int, long etc. You can alternatively just use SpriteBatch.DrawString(..., (GC.GetTotalMemory(false) / 1000).ToString(), ... )
SpriteBatchExtensions.cs : http://pastebin.com/z9aB7zFH
XNA, from my experience, runs at up to 60 frames per second while in focus, and at about 20 frames per second when out of focus. If, however, you have set IsFixedTimeStep = false;, the game will run as fast as it possibly can while the process is in focus. With my game, on my machine, it runs at about 500-700 fps. This is also tied to the number of Update() calls that occur, so my game is also updating 500-700 times per second.
My bet is that you have disabled the fixed timestep, and the massive number of Update and Draw calls are consuming 100% of your core and it is messing with your music. I would recommend removing the line IsFixedTimeStep = false; if it's there. If that line does not exist in your code, this is not the problem, although I would bet that your Update or Draw is doing more work than it should be.
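For reference, re-enabling the fixed timestep looks roughly like this in the Game constructor; the 60 fps target and the Game1 class name are just the usual template defaults:
public Game1()
{
    graphics = new GraphicsDeviceManager(this);
    Content.RootDirectory = "Content";

    // Cap Update/Draw at ~60 calls per second instead of running flat out.
    this.IsFixedTimeStep = true;
    this.TargetElapsedTime = TimeSpan.FromSeconds(1.0 / 60.0);

    // Optionally also wait for vertical retrace so Draw is throttled by vsync.
    graphics.SynchronizeWithVerticalRetrace = true;
}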
Just noticed this in my own game: I had a Console.WriteLine statement (for debugging) in my update loop, and it caused a lot of lag.
XNA adds a sleep when the window is not in focus!
I solved this problem some time ago by overriding the Game class and changing the way it decides whether its form is active or not.
As far as I know, there is no way to disable this behaviour without modifying the Game class's behaviour in code.
In particular, the way I found some time ago is a real hack/quirk!
It is a very unclean solution, but the only way I found.
// Requires: using System.Reflection;
public class MyGame : Game
{
    private MethodInfo pActivate;

    public MyGame()
    {
        // We need to access the base HostActivated method which, unfortunately, is private!
        // We have to use reflection, so of course this is a hack and not a real solution.
        // Ask Microsoft for a better implementation of their class!
        this.pActivate = typeof(Game).GetMethod("HostActivated", BindingFlags.NonPublic | BindingFlags.Instance);
    }

    protected sealed override void OnDeactivated(object sender, EventArgs args)
    {
        base.OnDeactivated(sender, args);

        // The game form was deactivated; make the Game class believe the form
        // was activated again right after the deactivation.
        if (!base.Active)
        {
            // Force activation by invoking the private Game.HostActivated method.
            this.pActivate.Invoke(this, new object[] { sender, args });
        }
    }
}
