Losing anti-aliasing when sharing a Graphics object between managed and unmanaged code - C#

Passing Graphics object between native C++ and C#
I'm currently working on a Paint.NET-like application. I have multiple types of layers which are implemented in C#. These layers are drawn into a .NET Graphics object that is provided by a WinForms user control - it is similar to the WPF canvas control. The layer base class has a Draw method that is implemented as follows:
public void Draw(IntPtr hdc)
{
    using (var graphics = Graphics.FromHdcInternal(hdc))
    {
        // First: set up rendering settings like SmoothingMode, TextRenderingHint, ...
        // Layer-specific drawing code goes here...
    }
}
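In plain C# terms (a sketch only; in the actual project the HDC comes from the native renderer shown further down, and canvasGraphics/layer are placeholder names), the HDC round trip around this method looks like:
// Sketch of the calling side: obtain an HDC from an existing Graphics,
// let the layer draw through it, then release the HDC again.
IntPtr hdc = canvasGraphics.GetHdc();
try
{
    layer.Draw(hdc);
}
finally
{
    canvasGraphics.ReleaseHdc(hdc);
}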
For performance reasons, and to make decompiling harder, I do the composition of the layers in a mixed-mode assembly, since I'm also applying effects like bevel or drop shadow. The wrapper, written in C++/CLI, is called directly from the canvas control and forwards the metadata of each layer and the target Graphics object (the Graphics object from my C#-written canvas user control) to a native C++ class.
C++/CLI Wrapper:
public ref class RendererWrapper
{
public:
    void Render(IEnumerable<Layer^>^ layersToDraw, Graphics^ targetGraphics)
    {
        // 1) For each layer get metadata (position, size AND Draw delegate)
        // 2) Send layer metadata to native renderer
        // 3) Call native renderer Render(targetGraphics->GetHdc()) method
        // 4) Release targetGraphics HDC
    }
};
Native C++ Renderer:
class NativeRenderer
{
public:
    void Render(vector<LayerMetaData> metaDataVector, HDC targetGraphicsHDC)
    {
        Graphics graphics(targetGraphicsHDC);
        // Set up rendering settings (SmoothingMode, TextRenderingHint, ...)
        for (auto& metaData : metaDataVector)
        {
            // Create bitmap and graphics for the current layer
            Bitmap* layerBitmap = new Bitmap(metaData.Width, metaData.Height, PixelFormat32bppARGB);
            Graphics* layerGraphics = new Graphics(layerBitmap);
            // Now the interesting interop part:
            // Get the HDC from layerGraphics
            HDC lgHDC = layerGraphics->GetHDC();
            // Call metaData.Delegate and pass the layerGraphics HDC to C#.
            // This call ends up in the Draw method of the C# Layer object.
            metaData.layerDrawDelegate(lgHDC);
            // Release the HDC - leaving interop...
            layerGraphics->ReleaseHDC(lgHDC);
            // Apply bevel/shadow effects
            // Do some other fancy stuff
            graphics.DrawImage(layerBitmap, metaData.X, metaData.Y, metaData.Width, metaData.Height);
            // Clean up the per-layer objects
            delete layerGraphics;
            delete layerBitmap;
        }
    }
};
So far so good. The above code works nearly as expected, but...
Problem
The only thing is that my current implementation lacks anti-aliasing and semi-transparency, for example when rendering PNGs with shadows. The alpha channel ends up with only two values: fully transparent or fully opaque (255). This side effect makes PNGs with an alpha channel and rendered fonts look very ugly. I cannot get the same smooth, semi-transparent anti-aliasing that I had before, when I worked with pure C# code.
BUT: When drawing a string into the native Graphics object directly,
layerGraphics->DrawString(...);
anti-aliasing and semi-transparency are back. So the problem only shows up when passing the Graphics HDC to .NET.
Questions
Is there any solution or workaround for this problem? I've tried creating the Bitmap directly in the C# Layer class and returning the IntPtr of the HBITMAP to the native code. That approach works, but then I have another problem: I cannot find a clean way to convert an HBITMAP to a GDI+ Bitmap with an alpha channel (white pixel noise surrounds the edges when drawing fonts).
Thanks for your input! :)
Demo Solution
Attached you'll find a demo solution here: Sources
In this demo solution I'm testing 3 different rendering methods (all implemented in NativeRenderer.cpp); the FIRST one shows the described problem:
1) RenderViaBitmapFromCSharp() - a) Creates a new bitmap in C++, creates a new Graphics object in C++, calls the C# drawing code by passing the C++ Graphics object's HDC - Fails. b) But: drawing directly from C++ into the created bitmap works.
2) RenderDirectlyFromCSharp() - Creates a new Graphics object from the C# Graphics handle in C++, calls the C# drawing code by passing the C++ Graphics object's HDC - Works
3) RenderDirectlyFromCPP() - Creates a new Graphics object from the C# Graphics handle in C++, draws the text directly in C++ - Works

Graphics graphics(targetGraphicsHDC);
You are creating a new Graphics object, so it won't have its properties set up the way the original did. Properties like TextRenderingHint are not properties of a GDI device context; they are specific to Graphics.
Fairly ugly problem: you'll need to re-initialize the Graphics object the way it was done in the calling code. That's two chunks of code that are far removed from each other. Avoiding the conversion to HDC and back is the only really decent way to side-step the problem.
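One way to keep those two far-removed chunks in sync (a minimal sketch, not part of the original answer; the helper name and the particular settings are just examples) is to put the quality settings into a single helper that both the canvas code and the layer Draw code call:
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Text;

internal static class RenderSettings
{
    // Apply the same quality settings everywhere a Graphics is (re)created,
    // including the one built from the HDC inside Layer.Draw.
    public static void Apply(Graphics g)
    {
        g.SmoothingMode = SmoothingMode.AntiAlias;
        g.TextRenderingHint = TextRenderingHint.AntiAliasGridFit;
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.PixelOffsetMode = PixelOffsetMode.HighQuality;
        g.CompositingQuality = CompositingQuality.HighQuality;
    }
}
Inside Draw(IntPtr hdc) you would then call RenderSettings.Apply(graphics) right after creating the Graphics from the HDC, and do the same in the canvas code.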

I ended up creating the Bitmap in C# and passing the object to C++/CLI. As already mentioned by Hans and Vincent, you have to avoid GetHDC. So my workaround reads, in pseudo code, as follows:
Layer.cs C#:
public Bitmap Draw()
{
    var bitmap = new Bitmap(Width, Height, PixelFormat.Format32bppArgb);
    using (var graphics = Graphics.FromImage(bitmap))
    {
        // First: set up rendering settings like SmoothingMode, TextRenderingHint, ...
        // Layer-specific drawing code goes here...
    }
    return bitmap;
}
NativeRenderer.cpp (C++/CLI):
void NativeRenderer::RenderFromBitmapCSharp(System::Drawing::Bitmap^ bitmap)
{
    // Create and lock an empty native bitmap
    Bitmap* gdiBitmap = new Bitmap(bitmap->Width, bitmap->Height, PixelFormat32bppARGB);
    Rect rect(0, 0, bitmap->Width, bitmap->Height);
    BitmapData bitmapData;
    gdiBitmap->LockBits(&rect, Gdiplus::ImageLockModeRead | Gdiplus::ImageLockModeWrite, PixelFormat32bppARGB, &bitmapData);

    // Lock the managed bitmap
    System::Drawing::Rectangle rectangle(0, 0, bitmap->Width, bitmap->Height);
    System::Drawing::Imaging::BitmapData^ pBitmapData = bitmap->LockBits(rectangle, System::Drawing::Imaging::ImageLockMode::ReadOnly, System::Drawing::Imaging::PixelFormat::Format32bppArgb);

    // Copy from the managed to the unmanaged bitmap
    // (assumes both strides are Width * 4, which holds for 32bpp ARGB)
    ::memcpy(bitmapData.Scan0, pBitmapData->Scan0.ToPointer(), bitmap->Width * bitmap->Height * 4);

    bitmap->UnlockBits(pBitmapData);
    gdiBitmap->UnlockBits(&bitmapData);

    // Draw it
    _graphics->DrawImage(gdiBitmap, 0, 0, bitmap->Width, bitmap->Height);

    // Clean up the temporary native bitmap
    delete gdiBitmap;
}
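For completeness, the managed calling side (not shown above) then just hands each layer's bitmap to the composition step via the C++/CLI wrapper; the names layers and rendererWrapper below are placeholders:
// Hypothetical caller: each layer renders itself into a managed bitmap,
// which is then handed to the C++/CLI wrapper for native composition.
foreach (Layer layer in layers)
{
    using (Bitmap layerBitmap = layer.Draw())
    {
        rendererWrapper.RenderFromBitmap(layerBitmap); // forwards to NativeRenderer::RenderFromBitmapCSharp
    }
}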
Hope that is helpful to others - I have not found any code snippet on the web that actually converts a managed GDI+ Bitmap to an unmanaged one.
Thank you all for your comments.
Cheers,
P

Related

Capturing a portion of the screen as fast as possible

I have made a tool which captures a portion of my screen.
All the pixels within that red square are loaded into a Bitmap:
private Bitmap Capture()
{
    var bitmap = new Bitmap(measurementToolBounds.Width, measurementToolBounds.Height);
    using (var g = Graphics.FromImage(bitmap))
    {
        g.CopyFromScreen(measurementToolBounds.Location, Point.Empty, measurementToolBounds.Size, CopyPixelOperation.SourceCopy);
        g.Flush();
    }
    return bitmap;
}
Unfortunately, CopyFromScreen takes roughly 30ms to execute.
I've tried using gdi32.dll and BitBlt, too, but as far as I know, CopyFromScreen calls BitBlt behind the scenes.
Is there a faster way to do this using C# or perhaps even a 3rd party utility tool that you know of that I can use to retrieve a portion of the screen as a byte array?

Resizing swapchain causes Bitmap not to be usable (SharpDX, directX)

I want to resize my swapchain when the screen is resized in my Windows Forms application. When I do that, I need to dispose of my old device context, buffer, target, etc.
Look at the code below:
Public Overrides Sub Resize(Width As Integer, Height As Integer)
    m_backBuffer.Dispose()
    m_d2dContext.Dispose()
    m_2dTarget.Dispose()

    m_swapChain.ResizeBuffers(2, Width, Height, Format.R8G8B8A8_UNorm, SwapChainFlags.None)
    m_backBuffer = m_swapChain.GetBackBuffer(Of Surface)(0)

    Dim properties As BitmapProperties = New BitmapProperties(New SharpDX.Direct2D1.PixelFormat(SharpDX.DXGI.Format.R8G8B8A8_UNorm, SharpDX.Direct2D1.AlphaMode.Premultiplied), 96, 96)
    Dim dxgiDevice As SharpDX.DXGI.Device = m_device.QueryInterface(Of SharpDX.DXGI.Device)()
    Dim d2dDevice As SharpDX.Direct2D1.Device = New SharpDX.Direct2D1.Device(dxgiDevice)

    m_d2dContext = New SharpDX.Direct2D1.DeviceContext(d2dDevice, SharpDX.Direct2D1.DeviceContextOptions.None)
    m_2dTarget = New SharpDX.Direct2D1.Bitmap(m_d2dContext, m_backBuffer, properties)
    m_d2dContext.Target = m_2dTarget

    CType(m_Context, GpuDrawingContext).setRenderTarget(m_d2dContext)
End Sub
The problem is that the bitmaps I previously created to display on screen needed a DeviceContext as a parameter at creation time. Now that I am instantiating a new DeviceContext on resize, I get a WrongFactory error when I try to draw those bitmaps on the device context, because they weren't created with the same DeviceContext I'm drawing them with.
Any solutions for the resize function?
Your code seems to be fundamentally wrong. When handling a resize you don't really need to call anything but ResizeBuffers. And you definitely don't need to dispose of m_d2dContext, as you typically keep the same one for the lifetime of your application. The rest of your code actually belongs to frame rendering; typically at each iteration you do the following (see the sketch after this list):
obtain the backbuffer (note that it should not be cached, because after each Present the swap chain will return a different backbuffer surface)
create a d2d bitmap object for the backbuffer surface
dispose of the backbuffer
set this d2d bitmap object as the render target of the d2d device context
dispose of the d2d bitmap (it stays alive as long as it is being used by the dc)
begin draw
draw...
end draw
reset the d2d device context render target
present the fresh frame
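A rough C# sketch of that per-frame sequence (mine, not from the original answer), using the same SharpDX types the question already uses; swapChain and d2dContext are assumed to be existing fields:
// Per-frame rendering sketch (SharpDX, Direct2D device context).
// The Resize handler itself would only need swapChain.ResizeBuffers(...).
var properties = new SharpDX.Direct2D1.BitmapProperties(
    new SharpDX.Direct2D1.PixelFormat(SharpDX.DXGI.Format.R8G8B8A8_UNorm,
                                      SharpDX.Direct2D1.AlphaMode.Premultiplied), 96, 96);

using (var backBuffer = swapChain.GetBackBuffer<SharpDX.DXGI.Surface>(0))
using (var targetBitmap = new SharpDX.Direct2D1.Bitmap(d2dContext, backBuffer, properties))
{
    d2dContext.Target = targetBitmap;   // the context keeps the bitmap alive while it is the target

    d2dContext.BeginDraw();
    // ... draw the frame here ...
    d2dContext.EndDraw();

    d2dContext.Target = null;           // drop the reference to the backbuffer bitmap
}

swapChain.Present(1, SharpDX.DXGI.PresentFlags.None);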

BitBlt failing on window rendered by OpenGL

I'm making a program to take a screenshot of a game while I'm streaming, so it takes a screenshot and saves it automatically. But when I set the game to use OpenGL my function fails: it keeps saving the same image over and over again, and the image only changes after I restart the game.
It seems to work on the first run, but on the next ones it keeps saving the first image.
Here is what I'm using:
public static Bitmap PrintWindow(IntPtr hwnd) {
    try {
        RECT rc;
        GetClientRect(hwnd, out rc);

        IntPtr hdcFrom = GetDC(hwnd);
        IntPtr hdcTo = CreateCompatibleDC(hdcFrom);

        // Width and height of the window's client area
        int Width = rc.right;
        int Height = rc.bottom;

        Bitmap bmp = null;
        IntPtr hBitmap = CreateCompatibleBitmap(hdcFrom, Width, Height);
        if (hBitmap != IntPtr.Zero) {
            // Select the bitmap into the memory DC and copy the window contents into it
            IntPtr hLocalBitmap = SelectObject(hdcTo, hBitmap);
            BitBlt(hdcTo, 0, 0, Width, Height, hdcFrom, 0, 0, CopyPixelOperation.SourceCopy);
            SelectObject(hdcTo, hLocalBitmap);

            // We delete the memory device context.
            DeleteDC(hdcTo);
            // We release the window device context.
            ReleaseDC(hwnd, hdcFrom);

            // The managed Bitmap is created from the bitmap handle.
            bmp = System.Drawing.Image.FromHbitmap(hBitmap);
            // Delete the compatible bitmap object.
            DeleteObject(hBitmap);

            bmp.Save("saving.png", System.Drawing.Imaging.ImageFormat.Png);
        }
        return bmp;
    }
    catch {
    }
    return new Bitmap(0, 0);
}
If I change the game graphics to use DirectX it works fine; it only happens while using OpenGL, so I'm not sure if it must be done differently for OpenGL windows or if it's impossible to capture that kind of window.
Using double buffered OpenGL is mutually exclusive with using GDI operations¹. Use glReadPixels to take a screenshot.
¹: Well, technically, if you know what you're doing and take the right precautions, you can mix them. But it's more trouble than it's worth.
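Note that glReadPixels reads from the currently bound GL context, so it only helps from code running inside that context (for example an injected hook, or when you own the GL window yourself). A minimal P/Invoke sketch, with the relevant GL constants written out by hand:
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

static class GlCapture {
    const uint GL_BGRA = 0x80E1;
    const uint GL_UNSIGNED_BYTE = 0x1401;

    [DllImport("opengl32.dll")]
    static extern void glReadPixels(int x, int y, int width, int height,
                                    uint format, uint type, IntPtr data);

    // Must be called while the GL context that renders the game is current on this thread.
    public static Bitmap Capture(int width, int height) {
        var bmp = new Bitmap(width, height, PixelFormat.Format32bppArgb);
        var data = bmp.LockBits(new Rectangle(0, 0, width, height),
                                ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
        // Read the framebuffer directly into the bitmap's pixel buffer.
        glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, data.Scan0);
        bmp.UnlockBits(data);
        // OpenGL's origin is the bottom-left corner, so flip vertically.
        bmp.RotateFlip(RotateFlipType.RotateNoneFlipY);
        return bmp;
    }
}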
After a recent Windows 10 update, DirectX-rendered windows now suffer from the same problem. I found a workaround, but you're not going to like it...
If you set hwnd = 0, the BitBlt now refers to the whole screen instead of just one window. Then you can change BitBlt's source offset values to grab only the target window.
Although this works, it's much slower than the original way you had it. :(
On my laptop, grabbing a 1080p window used to take 3ms, but now, using this workaround, it takes 27ms, which really ruins the streaming performance. :(
Still, it's better than nothing.
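Expressed in C#, that workaround could look roughly like this (a sketch using Graphics.CopyFromScreen instead of raw BitBlt for brevity; GetWindowRect supplies the source offsets):
using System;
using System.Drawing;
using System.Runtime.InteropServices;

static class ScreenRegionCapture {
    [StructLayout(LayoutKind.Sequential)]
    struct RECT { public int left, top, right, bottom; }

    [DllImport("user32.dll")]
    static extern bool GetWindowRect(IntPtr hWnd, out RECT rect);

    // Copies the target window's area out of the desktop instead of from the window's own DC,
    // so the window must be visible and unobscured.
    public static Bitmap CaptureWindowFromScreen(IntPtr hwnd) {
        RECT rc;
        GetWindowRect(hwnd, out rc);
        int width = rc.right - rc.left;
        int height = rc.bottom - rc.top;

        var bmp = new Bitmap(width, height);
        using (var g = Graphics.FromImage(bmp)) {
            // Source offset = window position on the desktop.
            g.CopyFromScreen(rc.left, rc.top, 0, 0, new Size(width, height));
        }
        return bmp;
    }
}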

How does the Graphics CopyFromScreen method copy into a bitmap?

private void startBot_Click(object sender, EventArgs e)
{
    Bitmap bmpScreenshot = Screenshot();
    this.BackgroundImage = bmpScreenshot;
}

private Bitmap Screenshot()
{
    // This is where we will store a snapshot of the screen
    Bitmap bmpScreenshot =
        new Bitmap(Screen.PrimaryScreen.Bounds.Width, Screen.PrimaryScreen.Bounds.Height);

    // Creates a graphics object so we can draw the screen into the bitmap (bmpScreenshot)
    Graphics g = Graphics.FromImage(bmpScreenshot);

    // Copy from screen into the bitmap we created
    g.CopyFromScreen(0, 0, 0, 0, Screen.PrimaryScreen.Bounds.Size);

    // Return the screenshot
    return bmpScreenshot;
}
I've recently been playing around with C# and I'm just following a tutorial. I don't understand why, if I were to remove Graphics g, it wouldn't set the image as the background. At no point does the code establish any relation between the variables other than Graphics g = Graphics.FromImage(bmpScreenshot); then g is given some parameters, but then we return bmpScreenshot, which just doesn't make sense to me. I would expect g to be returned?
Devices that can display graphics are virtualized in Windows. The concept is called a "device context" in the winapi; the underlying representation is a "handle". The Graphics class wraps that handle, it does not itself store the pixels. Note the Graphics.GetHdc() method, a way to get to that handle.
The class otherwise just contains the drawing methods that produce graphics output on the device represented by that handle. Actual devices can be the screen, a printer, a metafile, a bitmap. With the big advantage in your own code that it can be used to produce output wherever you want it to go. So printing is just as easy as painting to the screen or drawing to a bitmap that you store to a file.
So by calling Graphics.FromImage(), you associate the Graphics object with the bitmap. All of its draw methods actually set pixels in the bitmap. CopyFromScreen(), for example, simply copies pixels from the video adapter's frame buffer to the device context, in effect setting the pixels in the bitmap. So the expected return value of this code is the actual bitmap. The Graphics object should be disposed before that happens, since it is no longer useful. Or in other words, the underlying handle needs to be released so the operating system de-allocates its own resources for the device context.
That's a bug in the code snippet. Repeated calls to this method can easily crash the program when Windows refuses to create more device contexts. And the garbage collector doesn't otherwise catch up fast enough. It should be written as:
using (var g = Graphics.FromImage(bmpScreenshot)) {
    g.CopyFromScreen(0, 0, 0, 0, Screen.PrimaryScreen.Bounds.Size);
    return bmpScreenshot;
}
The thing to understand is that Graphics g = Graphics.FromImage(bmpScreenshot) creates a Graphics context for drawing into the image that was passed as an argument (bmpScreenshot).
So, after you create the graphics context:
Graphics g = Graphics.FromImage(bmpScreenshot)
When you copy from the screen:
g.CopyFromScreen(0, 0, 0, 0, Screen.PrimaryScreen.Bounds.Size);
This manipulates the bmpScreenshot Bitmap which Graphics g holds a reference to.
From the documentation:
image [in]:
Type: Image*
Pointer to an Image object that will be associated with the new Graphics object.
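A tiny example (mine, not from either answer) that makes the association visible: drawing through the Graphics changes the pixels of the bitmap it was created from.
using System;
using System.Drawing;

var bmp = new Bitmap(10, 10);
using (var g = Graphics.FromImage(bmp))
{
    g.Clear(Color.Red);   // draws into bmp, not into some separate surface
}
// The bitmap itself now holds the result of the drawing.
Console.WriteLine(bmp.GetPixel(0, 0));   // Color [A=255, R=255, G=0, B=0]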

C# opengl context handle getter returns wrong address

Problem solved!
I deleted the sharing pragma from the kernel string (using OpenCL 1.2).
I reordered GL-VBO creation and CL-context creation: first create the CL context from the GL context, then create the GL VBO, then acquire it with CL, then compute, then release it with CL, then bind it with GL, draw, finish GL, and start over. Always use clFinish to ensure it syncs with GL. For more speed, clFlush can be okay; maybe even an implicit sync can be done, which I did not try.
[original question from here]
In C#, context construction for OpenCL-GL interop fails because the handle getter function gives a wrong address and causes a System.AccessViolationException.
C# part:
[DllImport("opengl32.dll",EntryPoint="wglGetCurrentDC")]
extern static IntPtr wglGetCurrentDC();//CAl
[DllImport("opengl32.dll", EntryPoint = "wglGetCurrentContext")]
extern static IntPtr wglGetCurrentContext();// DCAl
C++ part in OpenCL (this is in a wrapper class for the C++ OpenCL bindings):
pl = new cl_platform_id[2];
clGetPlatformIDs(1, pl, NULL);
cl_context_properties props[] = { CL_GL_CONTEXT_KHR, (cl_context_properties)CAl,
                                  CL_WGL_HDC_KHR, (cl_context_properties)DCAl,
                                  CL_CONTEXT_PLATFORM, (cl_context_properties)&pl[0], 0 };
ctx = cl::Context(CL_DEVICE_TYPE_GPU, props, NULL, NULL, NULL); // error comes from here
//ctx = cl::Context(CL_DEVICE_TYPE_GPU); // this does not interop >:c
What is wrong in these parts? When I change "opengl32.dll" to "opengl64.dll", the compiler/linker cannot find it.
I'm calling wglGetCurrentDC() and wglGetCurrentContext() after glControl1 is loaded, but these seem to be giving wrong addresses. Calling wglMakeCurrent() or glControl1.MakeCurrent() before those did not solve the problem either.
OS: 64-bit Windows 7
Host: FX8150
Device: HD7870
MSVC2012 (Windows Forms application) + OpenTK (2010_10_6) + Khronos OpenCL 1.2 headers
Build target is x64 (release).
Note: the OpenCL part works well for computing (SGEMM) and the OpenGL part draws the VBO well (a plane built of triangles with some color and normals), but the OpenCL part (context) refuses to interop.
Edit: Adding #pragma OPENCL EXTENSION cl_khr_gl_sharing : enable to the kernel string did not solve the problem.
Edit: When I create the GL VBOs "after" the construction of the CL context, the error vanishes, but nothing is updated by the OpenCL kernel. Weird. Plus, when I delete the cl_khr_gl_sharing pragma, the 3D shape starts artifacting, which means OpenCL is doing something now, but it's just random deleted pixels and some cropped areas that I did not write in the kernel. Weirder. You can see this in the picture below (I am trying to make the flat blue sheet disappear but it doesn't fully disappear, and I also try changing the color and that is not changing).
Edit: CMSoft's OpenCLTemplate looks like what I need to learn/do, but their example code consists of only 6-7 lines! I don't know where to put the compute kernel and where to get/set the initial data, but that example works great (it gives hundreds of "WARNING! ComputeBuffer{T}(575296656) leaked." messages, by the way).
Edit: In case you wonder, here is the construction of the kernel arguments in C++:
// v1, v2, v3, v4 are unsigned ints taken from GL's `bindbuffer` in C#,
// so v1 is buf[0], v2 is buf[1], and so on
glBuf1=cl::BufferGL(ctx,CL_MEM_READ_WRITE,v1,0);
glBuf2=cl::BufferGL(ctx,CL_MEM_READ_WRITE,v2,0);
glBuf3=cl::BufferGL(ctx,CL_MEM_READ_WRITE,v3,0);
glBuf4=cl::BufferGL(ctx,CL_MEM_READ_WRITE,v4,0);
and here is how they are put into the command queue:
v.clear();
v.push_back(glBuf1);
v.push_back(glBuf2);
v.push_back(glBuf3);
v.push_back(glBuf4);
cq.enqueueAcquireGLObjects(&v,0,0);
cq.finish();
and here is how I set them as kernel arguments:
kernel.setArg(0,glBuf1);
kernel.setArg(1,glBuf2);
kernel.setArg(2,glBuf3);
kernel.setArg(3,glBuf3);
here is how it is executed:
cq.enqueueNDRangeKernel(kernel,referans,Global,Local);
cq.flush();
cq.finish();
here is how they are released:
cq.enqueueReleaseGLObjects(&v,0,0);
cq.finish();
Simulation iteration:
for (int i = 0; i < 200; i++)
{
    GL.Finish(); // lets cl take over
    // cl acquires buffers in glTest
    clh.glTest(gci.buf[0], gci.buf[1], gci.buf[2], gci.buf[3]); // then computes
    // then releases
    Thread.Sleep(50);

    glControl1.MakeCurrent();
    glControl1.Invalidate();
    gci.ciz(); // draw
}
