I'm using D3DImage to show images rendered with Direct3D. The Direct3D rendering needs to happen in its own thread, while the GUI thread grabs a surface whenever it wants and puts it on screen using D3DImage.
At first I tried to do this with a single D3D render target, but even with locks in place I had serious tearing, i.e. the rendering thread was overwriting the surface while WPF was copying it to its front buffer. WPF seems very unpredictable as to when it copies the data (i.e. it's not on D3DImage.Unlock(), or even on the next D3DImage.Lock(), as the documentation suggests).
So now what I'm doing is I have two render targets, and every time WPF displays a frame, it asks the rendering thread to swap its targets. So I'm always rendering into the target WPF isn't using.
This means that on each graphical update of the window, I do something like
m_d3dImage.Lock();
m_d3dImage.SetBackBuffer(D3DResourceType.IDirect3DSurface9, m_d3dRenderer.OutputSurface);
m_d3dImage.Unlock();
m_d3dRenderer.SwapSurfaces();
where OutputSurface is an IntPtr that points to the D3D render target we're not currently rendering to, and SwapSurfaces just swaps the two surface pointers and calls IDirect3DDevice9::SetRenderTarget with the one we'll use to render next.
EDIT: as requested, here is the code of SwapSurfaces():
var temp = m_renderingSurface;
m_renderingSurface = m_outputSurface;
m_outputSurface = temp;
m_d3dDevice.SetRenderTarget(0, m_renderingSurface);
Where m_renderingSurface and m_outputSurface are the two render targets (SharpDX.Direct3D9.Surface), and m_d3dDevice is the DeviceEx object.
This works beautifully, i.e. no tearing, but after a few seconds I get an OutOfMemoryException, and Direct3D produces the following debug output:
Direct3D9: (ERROR) :Invalid iBackBuffer parameter passed to GetBackBuffer
Direct3D9: (ERROR) :Error during initialization of texture. CreateTexture failed.
Direct3D9: (ERROR) :Failure trying to create a texture
Direct3D9: (ERROR) :Error during initialization of texture. CreateTexture failed.
Direct3D9: (ERROR) :Failure trying to create a texture
MIL FAILURE: Unexpected HRESULT 0x8876017c in caller: CInteropDeviceBitmap::Present D3D failure
Direct3D9: (WARN) :Alloc of size 1577660 FAILED!
Direct3D9: (ERROR) :Out of memory allocating memory for surfaces.
Direct3D9: (ERROR) :Failure trying to create offscreen plain surface
I've found a related topic here where the proposed solution was to call D3DImage.SetBackBuffer() and pass IntPtr.Zero, but adding this just before the existing call didn't solve the issue. I also tried wrapping the SetBackBuffer(..., IntPtr.Zero) call in Lock()/Unlock(), and that didn't help either.
At this point I'm wondering if there's a bug in D3DImage or if I should use a different approach altogether. Could I replace my two render targets with a D3D swap chain instead? Would that let me stop calling SetBackBuffer with a different pointer all the time? I'm a newbie with Direct3D.
EDIT: I looked at the code of D3DImage.SetBackBuffer() using .NET Reflector, and it creates an InteropBitmap every time. It doesn't do anything special for IntPtr.Zero. Since I'm calling this many times per second, perhaps the resources don't have time to be freed. At this point I'm thinking of using two different D3DImages and alternating their visibility to avoid having to call SetBackBuffer() all the time.
Thanks.
It appears that D3DImage creates a new Direct3D texture every time you set its backbuffer pointer to something different. This is eventually cleaned up, but setting it 30 times per second like I was doing doesn't leave it enough time, and it is a big performance killer anyway. The approach I went for was to create several D3DImages, each with its own surface, stack them on top of each other and toggle their Visibility property so only one shows at a time. This works very well and doesn't leak any memory.
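A minimal sketch of that idea, assuming each D3DImage is the Source of its own Image element stacked in the same Grid cell (m_images, m_current and ShowFrame are illustrative names, not from the original code):

void ShowFrame(int next)
{
    // show the Image whose D3DImage wraps the freshly rendered surface,
    // hide the one whose surface the render thread will draw into next
    m_images[next].Visibility = Visibility.Visible;
    m_images[m_current].Visibility = Visibility.Hidden;
    m_current = next;
}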
I use a D3DImage to display images at a rapid framerate and ran into this same problem. I found out that D3DImage does create a new texture if you set the backbuffer pointer to a different pointer. But if you only change the backbuffer pointer's content (and not the pointer's address) it will not create a new texture.
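In other words, keep handing D3DImage the same surface pointer and only update what that surface contains, then tell WPF which region changed. A hedged sketch of that pattern (CopyRenderedFrame and m_sharedSurface are placeholders for however you copy the new frame into the surface that was registered with SetBackBuffer, e.g. via IDirect3DDevice9::StretchRect):

m_d3dImage.Lock();
// the pointer passed to SetBackBuffer earlier never changes; only the pixels behind it do
CopyRenderedFrame(m_renderingSurface, m_sharedSurface);
m_d3dImage.AddDirtyRect(new Int32Rect(0, 0, m_d3dImage.PixelWidth, m_d3dImage.PixelHeight));
m_d3dImage.Unlock();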
Related
I have a specific way of generating meshes and structuring my scene that makes occlusion culling very straightforward and optimal. Now all I need is to know how to actually show or hide a mesh efficiently using the ECS hybrid renderer. I considered changing the layer to a hidden layer in the RenderMesh component, but RenderMesh is an ISharedComponentData and so does not support jobification or Burst. I saw the Unity BatchRendererGroup API and it looked promising with its OnPerformCulling callback, but I don't know if it is possible to hook into the HybridRenderSystem's internal BatchRendererGroup. I also saw the DisableRendering IComponentData tag that, I guess, disables an entity's rendering. However, again, this can only be done from the main thread. I could write my own solution to render meshes using Graphics.DrawMesh or something like it, but I would prefer to integrate it natively with the HybridRenderer in order to also cull meshes that are not related to my procedural meshes.
Is any of this possible? What is the intended use?
I'm not sure it's the best option, but you could try a parallel command buffer:
var ecb = new EntityCommandBuffer( Allocator.TempJob );
var cmd = ecb.AsParallelWriter();

/* job 1 executes with Burst; cmd records adding/removing the Disabled or DisableRendering tags */

// job 2 (main thread) plays back the recorded commands:
Job
    .WithName("playback_commands")
    .WithCode( () =>
    {
        ecb.Playback( EntityManager );
        ecb.Dispose();
    } )
    .WithoutBurst().Run();
There is another way of hiding/showing entities, but it requires you to group adjacent entities in chunks spatially (you're probably doing that already). Then you will be occluding not specific entities one by one but entire chunks of them (sectors of space). It's possible thanks to the fabulous powers of:
chunk component data
var queryEnabled = EntityManager.CreateEntityQuery(
ComponentType.ReadOnly<RenderMesh>()
, ComponentType.Exclude<Disabled>()
);
queryEnabled.SetSharedComponentFilter( new SharedSector {
Value = new int3{ x=4 , y=1 , z=7 }
} );
EntityManager.AddChunkComponentData( queryEnabled , default(Disabled) );
// EntityManager.RemoveChunkComponentData<Disabled>( queryDisabled );
public struct SharedSector : ISharedComponentData
{
public int3 Value;
}
The answer is that you can't, and you shouldn't! Unity Hybrid rendering gets its speed by laying out data in sequence. If there were a boolean for each mesh that allowed you to show or hide it, Unity would still have to evaluate it, which I guess is not in their design philosophy. As I found out, this whole design philosophy did not work out for me in general.
My world is made up of chunks of procedurally generated terrain meshes (think Minecraft but better ;)). The problem with this is that each chunk has its own RenderMesh with a unique mesh... meaning that each chunk gets its own... chunk... in memory xD. Which, as appropriate as that sounds, is extremely inefficient. I decided to abandon Hybrid ECS altogether and use good old game objects. With this change alone I saw a performance boost of 4x (going from 200 to 800 fps). I just used the MeshRenderer.enabled property to efficiently enable and disable rendering. To jobify this, I simply stored an array of the mesh bounds and a boolean for whether each is visible or not. I could then evaluate this array in a job and spit back out an index list of all the chunks that needed their visibility changed, as in the sketch below. That leaves only setting a few boolean values for the main thread, which is not very expensive at all... It is not the ECS-friendly solution I was looking for, but from the looks of it, ECS was not exactly my friend here. Having unique meshes for each section of my world was clearly not the intended use case of Hybrid ECS.
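A hedged sketch of that job (the struct and field names are illustrative, and the frustum planes are assumed to be copied in as float4(normal, distance), e.g. from GeometryUtility.CalculateFrustumPlanes):

using Unity.Burst;
using Unity.Collections;
using Unity.Jobs;
using Unity.Mathematics;

[BurstCompile]
public struct FrustumCullJob : IJobParallelFor
{
    [ReadOnly] public NativeArray<float3> Centers;  // AABB center per chunk
    [ReadOnly] public NativeArray<float3> Extents;  // AABB half-size per chunk
    [ReadOnly] public NativeArray<float4> Planes;   // frustum planes as (normal.xyz, distance)
    [WriteOnly] public NativeArray<byte> Visible;   // 1 = visible, 0 = culled (byte keeps it blittable)

    public void Execute(int i)
    {
        float3 c = Centers[i];
        float3 e = Extents[i];
        byte visible = 1;
        for (int p = 0; p < Planes.Length; p++)
        {
            float3 n = Planes[p].xyz;
            // distance from the plane to the AABB vertex farthest along the plane normal
            float r = math.dot(e, math.abs(n));
            if (math.dot(n, c) + Planes[p].w + r < 0f) { visible = 0; break; }
        }
        Visible[i] = visible;
    }
}

After the job completes, comparing Visible against the previous frame's values gives exactly the index list of chunks whose MeshRenderer.enabled needs flipping, so the main thread only writes a few booleans.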
I am an old Delphi programmer; I am used to creating objects and reusing them the whole time for efficient memory usage. But in C# (or maybe it's just all the tutorials I've ever seen), you create stuff with new every time (thanks to the garbage collector!! let me do the coding)..
Anyway, I am trying to create design software that involves a lot of drawing.
My question is: do I have to create a Graphics object, or should I use e.Graphics from protected override void OnPaint(PaintEventArgs e) on every paint event? Because when I create a Graphics object and then resize the control that I draw on, the Graphics object I created has a clipping problem and only draws the old rectangle region..
Thanks
Caching objects makes sense when the object is expensive to create, cheap to store and relatively simple to keep updated. A Graphics object is unique in that none of these conditions are true:
It is very cheap to create, takes well less than a microsecond.
It is very expensive to store, the underlying device context is stored in the desktop heap of a session. The number of objects that can be stored is small, no more than 65535. All programs that run in the session share that heap.
It is very hard to keep updated, things happen behind your back that invalidates the device context. Like the user or your program changing the window size, invalidating the Graphics.ClipBounds property. You are wasting the opportunity to use the correct Graphics object, the one passed to you in a Paint event handler. Particularly a bug factory when you use double-buffering.
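For illustration only (control stands for any WinForms control; this is the clipping problem the question describes, not code from the answer):

Graphics cached = control.CreateGraphics();   // snapshot of the control's state right now
// ... the user resizes the control ...
Console.WriteLine(cached.ClipBounds);         // still reports the old bounds
cached.Dispose();                             // and it holds a scarce device context until disposed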
Caching a Graphics object is a bug.
If you want to draw on the surface always use the Graphics object from the Paint event!
If you want to draw into a Bitmap you create a Graphics object and use it as long as you want.
For the Paint event to work you need to collect all drawing in a List of graphic actions; so you will want to make a nice class to store all parameters needed.
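A minimal sketch of that idea, with illustrative names: the actions are stored as data and replayed from the Graphics object the Paint event hands you.

using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;

public class DrawingSurface : Control
{
    // one entry per graphic action; replace Rectangle with your own parameter class
    private readonly List<Rectangle> actions = new List<Rectangle>();

    public void AddRectangle(Rectangle rect)
    {
        actions.Add(rect);
        Invalidate();   // ask WinForms to raise Paint with a fresh Graphics object
    }

    protected override void OnPaint(PaintEventArgs e)
    {
        base.OnPaint(e);
        foreach (var rect in actions)
            e.Graphics.DrawRectangle(Pens.Black, rect);   // always e.Graphics, never a cached object
    }
}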
In your case you may want to consider a mixed approach: old graphic actions draw into a bitmap, which becomes e.g. the BackgroundImage or Image of your control.
Current/ongoing drawing is done on the surface. This amounts to using the bitmap as a cache, so you don't have to redraw lots of actions on every little change etc.
This is closely related to your undo/redo implementation. You could set a limit and draw everything before it into a Bitmap and everything after it onto the surface..
PS: You also should rethink your GC attitude. It is simple, efficient and a blessing to have around. (And, yes, I have done my share of TP & Delphi, way back when they were affordable..) - Yes, we do the coding, but GC is not about coding but about housekeeping. Boring at best.. (And you can always design to avoid it, but not with a Graphics object in a windows system.)
A general rule for every class that implements IDisposable is to Dispose() it, as soon as possible. Make sure you know about the using(...){} statement.
For drawing in WinForms (GDI+) the best practice is indeed to use the Graphics object from PaintEventArgs. And because you didn't create that one, do not Dispose() it. Don't stash it either.
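For the draw-into-a-Bitmap case, a small hedged example of that rule (the bitmap size and the control are illustrative):

// you created this Graphics yourself, so dispose it promptly; the Bitmap lives on
var bmp = new Bitmap(640, 480);
using (var g = Graphics.FromImage(bmp))
{
    g.Clear(Color.White);
    g.DrawLine(Pens.Red, 0, 0, 100, 100);
}
// e.g. myPanel.BackgroundImage = bmp;   // hypothetical control that shows the cached drawing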
I have to completely disagree with other more experienced members here who say it's no big deal or in fact better to recreate the Graphics object over and over.
The HDC is a pointer to an HDC__ struct, which is a struct with one member, "int unused". It is an absolute waste, and stupidity, to create another instance/object every time drawing needs to be done. The HDC is NOT large; it's either 4 or 8 bytes, and the struct it points to is in nearly all cases 4 bytes. Furthermore, regarding the point one person made, it doesn't help to declare the Graphics object with the "static" keyword at the beginning of WndProc() before the switch, because the only way to give the Graphics object a device context or handle to paint on is by calling its constructor, so "static" does nothing to save you from creating it over and over again.
On top of that, Microsoft recommends that you create an HDC pointer and assign it the value the PAINTSTRUCT already has, every single WM_PAINT message it sends.
I'm sorry but the WinAPI in my opinion is very bad. Just as an example, I spent all day researching how to make a child WS_EX_LAYERED window, to find out that in order to enable Win 8 features one has to add code in XML with the OS's ID number to the manifest. Just ridiculous.
I found a memory leak in my XNA 4.0 application written in C#. The program needs to run for a long time (days) but it runs out of memory over the course of several hours and crashes. Opening Task Manager and watching the memory footprint, every second another 20-30 KB of memory is allocated to my program until it runs out. I believe the memory leak occurs when I set the BasicEffect.Texture property because that is the statement that finally throws the OutOfMemory exception.
The program has around 300 large (512px) textures stored in memory as Texture2D objects. The textures are not square or even powers of 2 - e.g. can be 512x431 - one side is always 512px. These objects are created only at initialization, so I am fairly confident it is not caused by creating/destroying Texture2D objects dynamically. Some interface elements create their own textures, but only ever in a constructor, and these interface elements are never removed from the program.
I am rendering texture mapped triangles. Before each object is rendered with triangles, I set the BasicEffect.Texture property to the already created Texture2D object and the BasicEffect.TextureEnabled property to true. I apply the BasicEffect in between each of these calls with BasicEffect.CurrentTechnique.Passes[0].Apply() - I'm aware that I'm calling Apply() twice as much as I should, but the code is wrapped inside of a helper class that calls Apply() whenever any property of BasicEffect changes.
I am using a single BasicEffect class for the entire application and I change its properties and call Apply() any time I render an object.
First, could it be that changing the BasicEffect.Texture property and calling Apply() so many times is leaking memory?
Second, is this the proper way to render triangles with different textures? E.g. using a single BasicEffect and updating its properties?
This code is taken from a helper class so I've removed all the fluff and only included the pertinent XNA calls:
//single BasicEffect object for entire application
BasicEffect effect = new BasicEffect(graphicsDevice);
// loaded from file at initialization (before any Draw() is called)
Texture2D texture1 = Texture2D.FromStream(graphicsDevice, TitleContainer.OpenStream("image1.jpg"));
Texture2D texture2 = Texture2D.FromStream(graphicsDevice, TitleContainer.OpenStream("image2.jpg"));
// render object 1
if(effect.Texture != texture1) // effect.Texture eventually throws OutOfMemory exception
effect.Texture = texture1;
effect.CurrentTechnique.Passes[0].Apply();
effect.TextureEnabled = true;
effect.CurrentTechnique.Passes[0].Apply();
graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertices1, 0, numVertices1, indices1, 0, numTriangles1);
// render object 2
if(effect.Texture != texture2)
effect.Texture = texture2;
effect.CurrentTechnique.Passes[0].Apply();
effect.TextureEnabled = true;
effect.CurrentTechnique.Passes[0].Apply();
graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, vertices2, 0, numVertices2, indices2, 0, numTriangles2);
It's an XNA application, so 60 times per second I call my Draw method, which renders all my various interface elements. This means that I could be drawing between 100-200 textures per frame, and the more textures I draw, the faster I run out of memory, even though I am not calling new anywhere in the update/draw loops. I am more experienced with OpenGL than DirectX, so clearly something is happening behind the scenes that is creating unmanaged memory I'm not aware of.
My only suggestion is to group your textures into atlases rather than using them one by one. It will speed up your rendering time and reduce the load on the GPU, provided that you are rendering a massive number of textures per frame. The reason is that the GPU will not have to swap textures so often in order to render, which is an expensive operation. I am drawing on my knowledge of OpenGL here, but XNA is based on DirectX and I am assuming they handle textures in a similar manner (unless you are using MonoGame, which lets you use OpenGL).
That said, you did not provide very much information. The memory leak may be coming from the texture switching, but it could also be coming from somewhere else. Most of your memory is occupied by textures, which is probably why you are getting a crash there rather than somewhere else. I am guessing one of a few things is happening here:
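As a hedged illustration of the atlas idea (AtlasUV and its parameters are made up for this sketch): pack the 512px images into one large Texture2D, set effect.Texture to the atlas once, and remap each object's 0..1 texture coordinates into its sub-rectangle.

// remap a 0..1 UV into the atlas region (in pixels) that holds this object's image
Vector2 AtlasUV(Vector2 uv, Rectangle region, int atlasWidth, int atlasHeight)
{
    return new Vector2(
        (region.X + uv.X * region.Width) / (float)atlasWidth,
        (region.Y + uv.Y * region.Height) / (float)atlasHeight);
}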
The garbage collector is not working fast enough to pick up all the RAM being allocated inside the rendering functions
You have a memory leak somewhere else in your code, and it is showing up here instead
Once again, it is difficult to figure out what is going on here given how little I know about your code. But to the best of my ability, I have a few suggestions for you:
Run through your code and look at how you are referencing things. Make sure that you don't have any temporary references inside of classes and structs. If you use something, pass it around to different classes and later consider it "discarded", there is a good chance that something is still holding onto that object, preventing it from being deleted.
Do a search for all the "new" keywords you have in the solution. If something is constantly being created with the "new" keyword, it can be a huge memory leak, as it creates a massive amount of objects on the heap. The garbage collector SHOULD be picking them up, but I wouldn't say I trust the garbage collector very much. Worst case, the garbage collector is not coming around often enough to deal with this memory pressure.
Find ways to reduce the sizes of your textures. Atlasing is a solution that will reduce the overhead of packaging each texture in its own Texture2D. This can be a bit more work because you will have to build a system that pulls sub-textures out of the same file, but in your case it may very well be worth it.
If you are convinced that there is an issue within XNA, try a different implementation called Monogame. It follows the exact same structure as XNA but is maintained by a community. As a result, the guts of the libraries you are using have been rewritten and there is a good chance that whatever is destroying your heap has been fixed.
My advice to you? If you are really familiar with OpenGL and what you are doing is fairly simple, then I would check out OpenTK. It is a thin binding layer that "ports" OpenGL into C#. All the commands are exactly the same, and you have the flexibility of using the entire .NET library for all the extra fluff.
I hope this helps!
I have a function with the following code in it:
GameStateManagementGame.GraphicsDeviceManager.PreferredBackBufferWidth = width;
GameStateManagementGame.GraphicsDeviceManager.PreferredBackBufferHeight = height;
GameStateManagementGame.GraphicsDeviceManager.IsFullScreen = isFullScreen;
GameStateManagementGame.GraphicsDeviceManager.ApplyChanges();
When it's called at application start, if isFullScreen = true there is very noticeable screen flicker for a second or two, even if the width and height are the same as the desktop resolution. If I don't make the ApplyChanges() call this doesn't happen (but the settings do get applied). If I call the function after the game has fully started without the ApplyChanges() call, the settings don't get applied.
Now I can solve this problem by putting something in to skip the ApplyChanges() call at startup but I'd like to know why this is happening.
The only information I've managed to find regarding this problem either says that the flicker shouldn't happen if you're using the same resolution as the desktop, or provides overcomplicated and broken workarounds.
So my question is what is the reason for the behaviour described above and what's the best workaround?
The settings you set on GraphicsDeviceManager are applied in these cases:
If you call ApplyChanges()
If you call ToggleFullScreen()
By Game when Game.Run() is called (it creates the graphics device)
Notably, modifying any of the settings will not cause those settings to be applied immediately.
The likely reason for your flickering is that you are doing #3 and then immediately doing #1 (you are applying the settings twice in a row).
For initial start-up, you should set the correct settings on the GraphicsDeviceManager instance during your game class's constructor. Then those settings will be correct when Game.Run() is called.
Use ApplyChanges() only when the user changes the settings while the game is running.
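A minimal sketch of that start-up path (the resolution values are illustrative):

public class MyGame : Microsoft.Xna.Framework.Game
{
    GraphicsDeviceManager graphics;

    public MyGame()
    {
        graphics = new GraphicsDeviceManager(this);
        // set the desired mode here; Game.Run() creates the device with these values,
        // so no ApplyChanges() call (and no second mode switch) is needed at start-up
        graphics.PreferredBackBufferWidth = 1920;
        graphics.PreferredBackBufferHeight = 1080;
        graphics.IsFullScreen = true;
    }
}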
I am using glDrawPixels to display an image. I know, I should probably be using textures, but there are reasons I'm not, at least not for now. Anyway, the image being displayed is frequently updated, as if it is being scanned in. This works fine as long as I let it sit and finish the "scanning"; however, if I click on the screen while the "scanning" is still going on, I get an AccessViolationException at my glDrawPixels call.
Gl.glDrawPixels(mImageWidth, mImageHeight, Gl.GL_LUMINANCE, Gl.GL_UNSIGNED_SHORT, mDisplayBuffer);
mImageWidth and mImageHeight are the expected values, so they are not the issue.
I put a for loop that reads every element of mDisplayBuffer just before the glDrawPixels call. No problem occurred there, so the access violation doesn't seem to be coming from mDisplayBuffer itself.
So it must be something within glDrawPixels, right?
I am using the TAO framework so that I can use C# and OpenGl.
What's the type of mDisplayBuffer? Could it be being updated by another thread while glDrawPixels is in progress, or relocated by the garbage collector? (Try a scoped lock around the DrawPixels call.)
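For the scoped-lock suggestion, a hedged sketch (UpdateScanLine, DrawImage and the ushort[] buffer type are assumptions about your code): both threads take the same lock, so glDrawPixels never reads the buffer while it is being rewritten or replaced.

private readonly object mBufferLock = new object();

// called by the scanning thread
private void UpdateScanLine(ushort[] line, int row)
{
    lock (mBufferLock)
    {
        Array.Copy(line, 0, mDisplayBuffer, row * mImageWidth, mImageWidth);
    }
}

// called by the render thread
private void DrawImage()
{
    lock (mBufferLock)
    {
        Gl.glDrawPixels(mImageWidth, mImageHeight, Gl.GL_LUMINANCE,
                        Gl.GL_UNSIGNED_SHORT, mDisplayBuffer);
    }
}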