What governs DC scaling? (C#)

This code gets different scaling depending on which computer I run it on.
Metafile image;
IntPtr dib;
var memoryHdc = Win32Utils.CreateMemoryHdc(IntPtr.Zero, 1, 1, out dib);
try
{
    image = new Metafile(memoryHdc, EmfType.EmfOnly);
    using (var g = Graphics.FromImage(image))
    {
        Render(g, html, left, top, maxWidth, cssData, stylesheetLoad, imageLoad);
    }
}
finally
{
    Win32Utils.ReleaseMemoryHdc(memoryHdc, dib);
}
Going into the Render method, the Metafile object has a PixelFormat of DontCare and consequently does not have valid vertical or horizontal resolutions.
Coming out of the Render method, it has a value of Format32bppRgb and PhysicalDimension.Width and PhysicalDimension.Height have increased to accommodate the rendered image.
How can I make scaling independent of local settings?
Here's the implementation of CreateMemoryHdc (I didn't write it, it's from an OSS library).
public static IntPtr CreateMemoryHdc(IntPtr hdc, int width, int height, out IntPtr dib)
{
    // Create a memory DC so we can work off-screen
    IntPtr memoryHdc = CreateCompatibleDC(hdc);
    SetBkMode(memoryHdc, 1); // 1 = TRANSPARENT

    // Create a device-independent bitmap and select it into our DC
    var info = new BitMapInfo();
    info.biSize = Marshal.SizeOf(info);
    info.biWidth = width;
    info.biHeight = -height; // negative height = top-down DIB
    info.biPlanes = 1;
    info.biBitCount = 32;
    info.biCompression = 0; // BI_RGB
    IntPtr ppvBits;
    dib = CreateDIBSection(hdc, ref info, 0, out ppvBits, IntPtr.Zero, 0);
    SelectObject(memoryHdc, dib);
    return memoryHdc;
}
As you can see, the width, height and bit depth passed to the DC constructor are constant. Creating the metafile produces different physical dimensions. Right after executing this
image = new Metafile(memoryHdc, EmfType.EmfOnly);
the metafile has PhysicalDimension.Height (and width) of 26.43 on my workstation and 31.25 on the server to which I am deploying, so the difference in scaling is already evident and therefore probably not a consequence of anything in the rendering.
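One way to confirm this theory: the EMF header records the resolution of the reference DC, so logging the DPI of the memory DC on both machines should show the difference directly. A minimal sketch, assuming the usual GetDeviceCaps P/Invoke (LOGPIXELSX/LOGPIXELSY are the standard GDI index values):
[DllImport("gdi32.dll")]
static extern int GetDeviceCaps(IntPtr hdc, int index);

const int LOGPIXELSX = 88; // horizontal DPI of the device
const int LOGPIXELSY = 90; // vertical DPI of the device

// Log this on both machines; if the values differ, the scaling
// difference is baked in before any rendering happens.
int dpiX = GetDeviceCaps(memoryHdc, LOGPIXELSX);
int dpiY = GetDeviceCaps(memoryHdc, LOGPIXELSY);
Console.WriteLine("DC resolution: " + dpiX + "x" + dpiY + " DPI");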
This may be relevant. BitMapInfo is defined in the OSS library and looks like this:
internal struct BitMapInfo
{
    public int biSize;
    public int biWidth;
    public int biHeight;
    public short biPlanes;
    public short biBitCount;
    public int biCompression;
    public int biSizeImage;
    public int biXPelsPerMeter;
    public int biYPelsPerMeter;
    public int biClrUsed;
    public int biClrImportant;
    public byte bmiColors_rgbBlue;
    public byte bmiColors_rgbGreen;
    public byte bmiColors_rgbRed;
    public byte bmiColors_rgbReserved;
}
so possibly setting biXPelsPerMeter and biYPelsPerMeter would help. The above code doesn't set them, which may leave them at platform-dependent values.
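For reference, the attempted setting looks like this (3780 pixels per meter corresponds to 96 DPI, since 96 × 39.37 inches/meter ≈ 3780):
info.biXPelsPerMeter = 3780; // 96 DPI x 39.37 inches/meter
info.biYPelsPerMeter = 3780;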
Unfortunately, setting these values doesn't seem to make any difference. MSDN says:
biXPelsPerMeter
The horizontal resolution, in pixels-per-meter, of the target device for the bitmap. An application can use this value to select a bitmap from a resource group that best matches the characteristics of the current device.
So these settings are used when loading a bitmap from a resource. No help here.
This all looks pertinent: https://www.codeproject.com/articles/177394/Working-with-Metafile-Images-in-NET
It may help to know that this code does not run in an application. It renders HTML as a metafile for printing, and it lives inside a Web API webservice.
There is no user interface, so I'm not sure how to interpret the question of whether the process is DPI aware. The evidence suggests it is DPI affected, so the question may be pertinent.

GDI doesn't scale. Use GDI+ for device independence. You will lose antialiasing but most print devices are high DPI anyway.
Does the library in use have an option to use GDI+ instead?
(In my own case, yes. Problem solved.)
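For illustration, a minimal sketch of the GDI+ route: render to a raster Bitmap and pin its DPI explicitly so the scaling no longer depends on the machine. The 96 DPI and the maxHeight value are assumptions here, not from the question's library:
using (var bmp = new Bitmap(maxWidth, maxHeight)) // maxHeight: assumed, measured elsewhere
{
    bmp.SetResolution(96f, 96f); // fixed DPI regardless of local display settings
    using (var g = Graphics.FromImage(bmp))
    {
        Render(g, html, left, top, maxWidth, cssData, stylesheetLoad, imageLoad);
    }
    // bmp now scales identically on every machine
}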

Related

GDI+ DrawImage notably slower in C++ (Win32) than in C# (WinForms)

I am porting an application from C# (WinForms) to C++ and noticed that drawing an image using GDI+ is much slower in C++, even though it uses the same API.
The image is loaded at application startup into a System.Drawing.Image or Gdiplus::Image, respectively.
The C# drawing code is (directly in the main form):
public Form1()
{
    this.SetStyle(ControlStyles.UserPaint | ControlStyles.AllPaintingInWmPaint | ControlStyles.OptimizedDoubleBuffer, true);
    this.image = Image.FromFile(...);
}

private readonly Image image;

protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e);
    var sw = Stopwatch.StartNew();
    e.Graphics.TranslateTransform(this.translation.X, this.translation.Y); /* NOTE0 */
    e.Graphics.DrawImage(this.image, 0, 0, this.image.Width, this.image.Height);
    Debug.WriteLine(sw.Elapsed.TotalMilliseconds.ToString()); // ~3ms
}
Regarding SetStyle: AFAIK, these flags (1) make WndProc ignore WM_ERASEBKGND, and (2) allocate a temporary HDC and Graphics for double buffered drawing.
The C++ drawing code is more bloated.
I have browsed the reference source of System.Windows.Forms.Control to see how it handles HDC and how it implements double buffering.
As far as I can tell, my implementation matches that closely (see NOTE1). (Note that I implemented it in C++ first and then looked at how it's done in the .NET source; I may have overlooked things.)
The rest of the program is more or less what you get when you create a fresh Win32 project in VS2019. All error handling omitted for readability.
// In wWinMain:
Gdiplus::GdiplusStartupInput gdiplusStartupInput;
Gdiplus::GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
gdip_bitmap = Gdiplus::Image::FromFile(...);

// In the WndProc callback:
case WM_PAINT:
    // Need this for the back buffer bitmap
    RECT client_rect;
    GetClientRect(hWnd, &client_rect);
    int client_width = client_rect.right - client_rect.left;
    int client_height = client_rect.bottom - client_rect.top;

    // Double buffering
    HDC hdc0 = BeginPaint(hWnd, &ps);
    HDC hdc = CreateCompatibleDC(hdc0);
    HBITMAP back_buffer = CreateCompatibleBitmap(hdc0, client_width, client_height); /* NOTE1 */
    HBITMAP dummy_buffer = (HBITMAP)SelectObject(hdc, back_buffer);

    // Create GDI+ stuff on top of HDC
    Gdiplus::Graphics *graphics = Gdiplus::Graphics::FromHDC(hdc);

    QueryPerformanceCounter(...);
    graphics->DrawImage(gdip_bitmap, 0, 0, bitmap_width, bitmap_height);
    /* print performance counter diff */ // -> ~27 ms typically
    delete graphics;

    // Double buffering
    BitBlt(hdc0, 0, 0, client_width, client_height, hdc, 0, 0, SRCCOPY);
    SelectObject(hdc, dummy_buffer);
    DeleteObject(back_buffer);
    DeleteDC(hdc); // This is the temporary double buffer HDC
    EndPaint(hWnd, &ps);
/* NOTE1 */: In the .NET source code they don't use CreateCompatibleBitmap, but CreateDIBSection instead.
That improves performance from 27 ms to 21 ms and is very cumbersome (see below).
In both cases I am calling Control.Invalidate or InvalidateRect, respectively, when the mouse moves (OnMouseMove, WM_MOUSEMOVE). The goal is to implement panning with the mouse using SetTransform - that's irrelevant for now as long as draw performance is bad.
NOTE2: https://stackoverflow.com/a/1617930/653473
This answer suggests that using Gdiplus::CachedBitmap is the trick. However, I can find no evidence in the C# WinForms source code that it makes use of cached bitmaps in any way - the C# code calls GdipDrawImageRectI, which maps to Graphics::DrawImage(IN Image* image, IN INT x, IN INT y, IN INT width, IN INT height).
Regarding /* NOTE1 */, here is the replacement for CreateCompatibleBitmap (just substitute CreateVeryCompatibleBitmap):
bool bFillBitmapInfo(HDC hdc, BITMAPINFO *pbmi)
{
    HBITMAP hbm = NULL;
    bool bRet = false;

    // Create a dummy bitmap from which we can query color format info about the device surface.
    hbm = CreateCompatibleBitmap(hdc, 1, 1);
    pbmi->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);

    // Call first time to fill in BITMAPINFO header.
    GetDIBits(hdc, hbm, 0, 0, NULL, pbmi, DIB_RGB_COLORS);

    if ( pbmi->bmiHeader.biBitCount <= 8 ) {
        // UNSUPPORTED
    } else {
        if ( pbmi->bmiHeader.biCompression == BI_BITFIELDS ) {
            // Call a second time to get the color masks.
            // It's a GetDIBits Win32 "feature".
            GetDIBits(hdc, hbm, 0, pbmi->bmiHeader.biHeight, NULL, pbmi, DIB_RGB_COLORS);
        }
        bRet = true;
    }

    if (hbm != NULL) {
        DeleteObject(hbm);
        hbm = NULL;
    }
    return bRet;
}
HBITMAP CreateVeryCompatibleBitmap(HDC hdc, int width, int height)
{
    BITMAPINFO *pbmi = (BITMAPINFO *)LocalAlloc(LMEM_ZEROINIT, 4096); // Because otherwise I would have to figure out the actual size of the color table at the end; whatever...
    bFillBitmapInfo(hdc, pbmi);
    pbmi->bmiHeader.biWidth = width;
    pbmi->bmiHeader.biHeight = height;
    if (pbmi->bmiHeader.biCompression == BI_RGB) {
        pbmi->bmiHeader.biSizeImage = 0;
    } else {
        if ( pbmi->bmiHeader.biBitCount == 16 )
            pbmi->bmiHeader.biSizeImage = width * height * 2;
        else if ( pbmi->bmiHeader.biBitCount == 32 )
            pbmi->bmiHeader.biSizeImage = width * height * 4;
        else
            pbmi->bmiHeader.biSizeImage = 0;
    }
    pbmi->bmiHeader.biClrUsed = 0;
    pbmi->bmiHeader.biClrImportant = 0;

    void *dummy;
    HBITMAP back_buffer = CreateDIBSection(hdc, pbmi, DIB_RGB_COLORS, &dummy, NULL, 0);
    LocalFree(pbmi);
    return back_buffer;
}
Using a very compatible bitmap as the back buffer improves performance from 27 ms to 21 ms.
Regarding /* NOTE0 */ in the C# code -- the code is only fast if the transformation matrix doesn't scale. C# performance drops slightly when upscaling (~9ms), and drops significantly (~22ms) when downsampling.
This hints at something: DrawImage probably wants to BitBlt when it can. But it can't in my C++ case, because the bitmap format (as loaded from disk) differs from the back buffer format, or something along those lines.
If I create a new more compatible bitmap (this time no clear difference between CreateCompatibleBitmap and CreateVeryCompatibleBitmap), and then draw the original bitmap onto that, and then only use the more compatible bitmap in the DrawImage call, then performance increases to about 4.5 ms. It also has the same performance characteristics when scaling now as the C# code.
if (better_bitmap == NULL)
{
    HBITMAP tmp_bitmap = CreateVeryCompatibleBitmap(hdc0, gdip_bitmap->GetWidth(), gdip_bitmap->GetHeight());
    HDC copy_hdc = CreateCompatibleDC(hdc0);
    HGDIOBJ old = SelectObject(copy_hdc, tmp_bitmap);
    Gdiplus::Graphics *copy_graphics = Gdiplus::Graphics::FromHDC(copy_hdc);
    copy_graphics->DrawImage(gdip_bitmap, 0, 0, gdip_bitmap->GetWidth(), gdip_bitmap->GetHeight());
    // Now tmp_bitmap contains the image, hopefully in the device's preferred format
    delete copy_graphics;
    SelectObject(copy_hdc, old);
    DeleteDC(copy_hdc);
    better_bitmap = Gdiplus::Bitmap::FromHBITMAP(tmp_bitmap, NULL);
}
But it's still consistently slower; something must still be missing. And it raises a new question: why is this not necessary in C# (same image, same machine)? Image.FromFile does not convert the bitmap format on loading, as far as I can tell.
Why is the DrawImage call in the C++ code still slower, and what do I need to do to make it as fast as in C#?
I ended up replicating more of the .NET code insanity.
The magic call that makes it go fast is GdipImageForceValidation in System.Drawing.Image.FromFile. This function is basically not documented at all, and it is not even [officially] callable from C++. It is merely mentioned here: https://learn.microsoft.com/en-us/windows/win32/gdiplus/-gdiplus-image-flat
Gdiplus::Image::FromFile and GdipLoadImageFromFile don't actually load the full image into memory. It effectively gets copied from the disk every time it is being drawn. GdipImageForceValidation forces the image to be loaded into memory, or so it seems...
My initial idea of copying the image into a more compatible bitmap was on the right track, but the way I did it does not yield the best performance for GDI+ (because I used a GDI bitmap from the original HDC). Loading the image directly into a new GDI+ bitmap, regardless of pixel format, yields the same performance characteristics as seen in the C# implementation:
better_bitmap = new Gdiplus::Bitmap(gdip_bitmap->GetWidth(), gdip_bitmap->GetHeight(), PixelFormat24bppRGB);
Gdiplus::Graphics *graphics = Gdiplus::Graphics::FromImage(better_bitmap);
graphics->DrawImage(gdip_bitmap, 0, 0, gdip_bitmap->GetWidth(), gdip_bitmap->GetHeight());
delete graphics;
Even better yet, using PixelFormat32bppPARGB further improves performance substantially - the premultiplied alpha pays off when the image is repeatedly drawn (regardless of whether the source image has an alpha channel).
It seems calling GdipImageForceValidation effectively does something similar internally, although I don't know what it really does. Because Microsoft made it as hard as they could to call the GDI+ flat API from C++ user code, I just modified Gdiplus::Image in my Windows SDK headers to include an appropriate method. Copying the bitmap explicitly to PARGB seems cleaner to me (and yields better performance).
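For comparison, the equivalent pre-conversion on the C# side is a few lines of standard System.Drawing. A sketch (not from the original code; the helper name is illustrative):
static Bitmap ToPArgb(Image source)
{
    // Premultiplied 32bpp ARGB is the format GDI+ draws fastest
    var copy = new Bitmap(source.Width, source.Height, PixelFormat.Format32bppPArgb);
    using (var g = Graphics.FromImage(copy))
        g.DrawImage(source, 0, 0, source.Width, source.Height);
    return copy;
}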
Of course, once one finds out which undocumented function to use, Google also turns up some additional information: https://photosauce.net/blog/post/image-scaling-with-gdi-part-5-push-vs-pull-and-image-validation
GDI+ is not my favorite API.

Detect game hack through screenshot analysis C#

I'm trying to write some code to detect a wallhack for a game.
Basically, some hacks exist which create a Windows Aero transparent window and draw the hack onto this external window, so it can't be detected by taking a screenshot of the game itself.
My approach at the moment is to:
1. Take a screenshot of the game window.
2. Take a screenshot of the Windows desktop at the same coordinates.
3. Perform image analysis to compare screenshot 1 to screenshot 2 and see if there is a difference.
My problem is that screenshot 1 and screenshot 2 are not performed simultaneously so new game frames can be drawn between the two screenshots, causing false positives when the images are compared.
I want to know: is there a way to coordinate the screenshots so they occur at exactly the same time? Or can I somehow stop the screen from drawing any new frames until my screenshots are finished?
This is the code I use for taking screenshots.
Note, I have even tried to take the 2 screenshots in parallel by queuing two work items.
However, even this doesn't result in the screenshots happening at exactly the same time.
So I wonder: is there some way to stop any further updates to the screen from the graphics card until my screenshots finish? Or is there any other way I can do this?
public void DoBitBlt(IntPtr dest, int width, int height, IntPtr src)
{
    GDI32.BitBlt(dest, 0, 0, width, height, src, 0, 0, GDI32.SRCCOPY);
}

public struct Windows
{
    public Bitmap window;
    public Bitmap desktop;
}

public Windows CaptureWindows(IntPtr window, IntPtr desktop, User32.RECT coords)
{
    Windows rslt = new Windows();

    // get the hDC of the target window
    IntPtr hdcSrcWindow = User32.GetWindowDC(window);
    IntPtr hdcSrcDesktop = User32.GetWindowDC(desktop);

    // get the size
    int width = coords.right - coords.left;
    int height = coords.bottom - coords.top;

    // create a device context we can copy to
    IntPtr hdcDestWindow = GDI32.CreateCompatibleDC(hdcSrcWindow);
    IntPtr hdcDestDesktop = GDI32.CreateCompatibleDC(hdcSrcDesktop);

    // create a bitmap we can copy into
    IntPtr hBitmapWindow = GDI32.CreateCompatibleBitmap(hdcSrcWindow, width, height);
    IntPtr hBitmapDesktop = GDI32.CreateCompatibleBitmap(hdcSrcDesktop, width, height);

    // select the bitmap object
    IntPtr hOldWindow = GDI32.SelectObject(hdcDestWindow, hBitmapWindow);
    IntPtr hOldDesktop = GDI32.SelectObject(hdcDestDesktop, hBitmapDesktop);

    // bitblt over, on two thread pool threads in parallel
    var handle1 = new ManualResetEvent(false);
    var handle2 = new ManualResetEvent(false);
    Action actionWindow = () => { try { DoBitBlt(hdcDestWindow, width, height, hdcSrcWindow); } finally { handle1.Set(); } };
    Action actionDesktop = () => { try { DoBitBlt(hdcDestDesktop, width, height, hdcSrcDesktop); } finally { handle2.Set(); } };
    ThreadPool.QueueUserWorkItem(x => actionWindow());
    ThreadPool.QueueUserWorkItem(x => actionDesktop());
    WaitHandle.WaitAll(new WaitHandle[] { handle1, handle2 });

    rslt.window = Bitmap.FromHbitmap(hBitmapWindow);
    rslt.desktop = Bitmap.FromHbitmap(hBitmapDesktop);

    // restore selection
    GDI32.SelectObject(hdcDestWindow, hOldWindow);
    GDI32.SelectObject(hdcDestDesktop, hOldDesktop);

    // clean up
    GDI32.DeleteDC(hdcDestWindow);
    GDI32.DeleteDC(hdcDestDesktop);
    User32.ReleaseDC(window, hdcSrcWindow);
    User32.ReleaseDC(desktop, hdcSrcDesktop);

    // free up the HBITMAP objects (Bitmap.FromHbitmap made managed copies)
    GDI32.DeleteObject(hBitmapWindow);
    GDI32.DeleteObject(hBitmapDesktop);

    return rslt;
}
You are not going to be able to take both screenshots simultaneously unless you resort to some graphics accelerator, meaning it would not work on every computer.
About stopping rendering: as this is a game, I think that is not such a good idea; you want your game to run smoothly.
Instead, I would suggest storing recently rendered frames of your game in memory and comparing the screenshot against them. If you can add some visual clue for deciding which of the recent frames to compare against, it will work much better; otherwise you will have to compare the screenshot against all of them, and that will certainly eat some CPU/GPU time.
Are you using GDI to render? If so, you want to store the frames of your game as DIBs (device-independent bitmaps) so you can compare them.
As for the clue for deciding which frame to use, I would go for some sort of time representation on screen, maybe a single pixel that changes color. You would then read the color of that pixel, use it to find the right frame, and then proceed to compare the whole picture.
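A sketch of that single-pixel clue (all names are illustrative; it assumes the game can draw one known pixel per frame, and that the 24-bit counter wrapping around is acceptable):
// When rendering frame N, draw a pixel whose RGB encodes N:
static Color EncodeFrame(int n)
{
    return Color.FromArgb((n >> 16) & 0xFF, (n >> 8) & 0xFF, n & 0xFF);
}

// After taking the screenshot, decode that pixel to pick the cached frame to compare against:
static int DecodeFrame(Bitmap screenshot, int x, int y)
{
    Color c = screenshot.GetPixel(x, y);
    return (c.R << 16) | (c.G << 8) | c.B;
}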

glTexImage2D + byte[]

How can I upload pixels from a simple byte array to an OpenGl texture ?
I'm using glTexImage2D and all I get is a white rectangle instead of a pixelated texture. The ninth parameter (a 32-bit pointer to the pixel data) is, I believe, the problem. I have tried lots of parameter types there (byte, ref byte, byte[], ref byte[], int and IntPtr with Marshal, out byte, out byte[], byte*). glGetError() always returns GL_NO_ERROR. There must be something I'm doing wrong, because I never get gibberish pixels: it's always white. glGenTextures works correctly (the first id has the value 1, as always in OpenGL), and I can draw colored lines without any problem, so something is wrong with my texturing specifically. I'm in control of the DllImport, so I can change the parameter types if necessary.
GL.glBindTexture(GL.GL_TEXTURE_2D, id);
int w = 4;
int h = 4;
byte[] bytes = new byte[w * h * 4];
for (int i = 0; i < bytes.Length; i++)
    bytes[i] = (byte)Utils.random(256);
GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, w, h, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bytes);

[DllImport(GL_LIBRARY)]
public static extern void glTexImage2D(uint what, int level, int internalFormat, int width, int height, int border, int format, int type, byte[] bytes);
A common mistake is not changing the MIN filter, since the default is mipmapped, which makes the texture incomplete unless all mipmap levels are uploaded. Do this:
GL.glBindTexture(GL.GL_TEXTURE_2D, id);
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST);
Then draw the texture.
A texture remaining white despite data having been uploaded is an indicator that either not all mipmap levels were uploaded properly, or the filter settings are not set correctly.
Of interest is then
glTexParameteri(GL_TEXTURE_..., GL_TEXTURE_MIN_FILTER, GL_...);
with GL_NEAREST or GL_LINEAR to disable mipmapping. Mipmapping is enabled by default.
The other important thing is to set the structure of the data prior to uploading it, i.e. before calling glTexImage. For this you use the function glPixelStorei to set the GL_UNPACK_... parameters. You need to set things like alignment and stride. I refer you to the documentation.
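In the wrapper style used in the question, that would look something like the following. The glPixelStorei import is an assumption modelled on the existing ones; 0x0CF5 is the standard GL_UNPACK_ALIGNMENT constant value:
[DllImport(GL_LIBRARY)]
public static extern void glPixelStorei(uint pname, int param);

public const uint GL_UNPACK_ALIGNMENT = 0x0CF5;

// Tightly packed RGBA bytes: use 1-byte row alignment before glTexImage2D
GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1);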
Your P/Invoke declaration is wrong.
The short answer is:
[System.Runtime.InteropServices.DllImport(Library, EntryPoint = "glTexImage2D", ExactSpelling = true)]
internal extern static void glTexImage2D(int target, int level, int internalformat, Int32 width, Int32 height, int border, int format, int type, IntPtr pixels);
This P/Invoke declaration is a safe one (it uses IntPtr instead of raw pointers).
The problem is .NET memory management. Managed memory blocks are not fixed at a particular address: the garbage collector (GC) is free to move objects around the managed heap.
So what you need is to tell the .NET GC that the memory must not be moved while the native call is using it. To do so, use the fixed statement or pin the memory via a GCHandle.
For example:
public static void TexImage2D(int target, int level, int internalformat, Int32 width, Int32 height, int border, int format, int type, object pixels)
{
    // Pin the managed array so the GC cannot move it during the native call
    GCHandle pp_pixels = GCHandle.Alloc(pixels, GCHandleType.Pinned);
    try {
        if (Delegates.pglTexImage2D != null)
            Delegates.pglTexImage2D(target, level, internalformat, width, height, border, format, type, pp_pixels.AddrOfPinnedObject());
        else
            throw new InvalidOperationException("binding point TexImage2D cannot be found");
    } finally {
        pp_pixels.Free();
    }
}
The object parameter of the TexImage2D function is meant to be used with any array of data (any object deriving from Array, such as byte[], short[], int[] and so on).
Essentially, the code above tells the GC: take the address of pixels and don't move it until I call Free() on the memory handle.
Using the fixed statement is another option, but it requires an unsafe P/Invoke declaration and is a little more verbose (each call needs its own unsafe and fixed statements).
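A sketch of that fixed variant, assuming an unsafe overload of the import from the question:
[DllImport(GL_LIBRARY)]
public static extern unsafe void glTexImage2D(uint what, int level, int internalFormat,
    int width, int height, int border, int format, int type, byte* bytes);

// usage: pin the array just for the duration of the call
unsafe
{
    fixed (byte* p = bytes)
    {
        GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, w, h, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, p);
    }
}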

Determining exact glyph height in specified font

I have searched a lot and tried much, but I cannot find a proper solution.
Is there any approach for determining the exact glyph height in a specified font?
I mean that when I determine the height of a DOT glyph, I should receive its small actual height, not a height with padding or the font size.
I found a solution for determining exact glyph width here (I used the second approach), but it does not work for height.
UPDATE: I need a solution for .NET 1.1.
It's not that hard to get the character metrics. GDI contains a function GetGlyphOutline that you can call with the GGO_METRICS constant to get the height and width of the smallest enclosing rectangle required to contain the glyph when rendered. For example, a 10 point glyph for a dot in Arial gives a rectangle of 1x1 pixels, and the letter I roughly 95x14 pixels if the font is 100 points in size.
These are the declarations for the P/Invoke calls:
// the declarations
public struct FIXED
{
    public short fract;
    public short value;
}

public struct MAT2
{
    [MarshalAs(UnmanagedType.Struct)] public FIXED eM11;
    [MarshalAs(UnmanagedType.Struct)] public FIXED eM12;
    [MarshalAs(UnmanagedType.Struct)] public FIXED eM21;
    [MarshalAs(UnmanagedType.Struct)] public FIXED eM22;
}

[StructLayout(LayoutKind.Sequential)]
public struct POINT
{
    public int x;
    public int y;
}

[StructLayout(LayoutKind.Sequential)]
public struct POINTFX
{
    [MarshalAs(UnmanagedType.Struct)] public FIXED x;
    [MarshalAs(UnmanagedType.Struct)] public FIXED y;
}

[StructLayout(LayoutKind.Sequential)]
public struct GLYPHMETRICS
{
    public int gmBlackBoxX;
    public int gmBlackBoxY;
    [MarshalAs(UnmanagedType.Struct)] public POINT gmptGlyphOrigin;
    [MarshalAs(UnmanagedType.Struct)] public POINTFX gmptfxGlyphOrigin;
    public short gmCellIncX;
    public short gmCellIncY;
}

private const int GGO_METRICS = 0;
private const uint GDI_ERROR = 0xFFFFFFFF;

[DllImport("gdi32.dll")]
static extern uint GetGlyphOutline(IntPtr hdc, uint uChar, uint uFormat,
    out GLYPHMETRICS lpgm, uint cbBuffer, IntPtr lpvBuffer, ref MAT2 lpmat2);

[DllImport("gdi32.dll", ExactSpelling = true, PreserveSig = true, SetLastError = true)]
static extern IntPtr SelectObject(IntPtr hdc, IntPtr hgdiobj);
The actual code is rather trivial, if you don't count the P/Invoke redundancies. I tested it and it works (you can adapt it to get the width from GLYPHMETRICS as well).
Note: this is ad-hoc code. In the real world, you should release the HDC (Graphics.ReleaseHdc) and clean up the font handle with DeleteObject. Thanks to a comment by user2173353 for pointing this out.
// if you want exact metrics, use a high font size and divide the result
// otherwise, the resulting rectangle is rounded to the nearest int
private int GetGlyphHeight(char letter, string fontName, float fontPointSize)
{
    // init the font. Probably better to do this outside this function for performance
    Font font = new Font(new FontFamily(fontName), fontPointSize);
    GLYPHMETRICS metrics;

    // identity matrix, required
    MAT2 matrix = new MAT2
    {
        eM11 = {value = 1},
        eM12 = {value = 0},
        eM21 = {value = 0},
        eM22 = {value = 1}
    };

    // HDC needed, we use a bitmap
    using (Bitmap b = new Bitmap(1, 1))
    using (Graphics g = Graphics.FromImage(b))
    {
        IntPtr hdc = g.GetHdc();
        IntPtr prev = SelectObject(hdc, font.ToHfont());
        uint retVal = GetGlyphOutline(
            /* handle to DC */   hdc,
            /* the char/glyph */ letter,
            /* format param */   GGO_METRICS,
            /* glyph-metrics */  out metrics,
            /* buffer, ignore */ 0,
            /* buffer, ignore */ IntPtr.Zero,
            /* trans-matrix */   ref matrix);

        if (retVal == GDI_ERROR)
        {
            // something went wrong. Raise your own error here,
            // or just silently ignore
            return 0;
        }

        // return the height of the smallest rectangle containing the glyph
        return metrics.gmBlackBoxY;
    }
}
Can you update the question to include what you have tried?
By dot glyph, I assume you mean the punctuation mark detailed here?
Is this glyph height displayed on screen, or on a printed page?
I managed to modify the first method in the link you posted to count the matching vertical pixels, but identifying the largest height of the glyph is fiddly unless you are willing to draw character by character, so it isn't really a general working solution like the article's.
A general working solution would need to identify the largest single-pixel vertical region of the character/glyph, then count the number of pixels in that region.
I also verified that Graphics.MeasureString, TextRenderer.MeasureText and Graphics.MeasureCharacterRanges all return a bounding box, which gives a number similar to the font height.
The alternative is the Glyph.ActualHeight property, which gets the rendered height of the framework element. This is part of WPF, along with the related GlyphTypeface and GlyphRun classes. I wasn't able to test them at this time, having only Mono.
The steps for getting Glyph.ActualHeight are as follows (a sketch follows this list):
1. Initialise the arguments for GlyphRun.
2. Initialise the GlyphRun object.
3. Access the relevant Glyph using glyphTypeface.CharacterToGlyphMap[text[n]] or, more correctly, glyphTypeface.GlyphIndices[n], where glyphTypeface is your GlyphTypeface, created from the Typeface object you make in step 1.
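A sketch of those steps (WPF; Arial is just an example, and a production version should handle TryGetGlyphTypeface failing for composite fonts):
var typeface = new Typeface("Arial");
GlyphTypeface glyphTypeface;
if (typeface.TryGetGlyphTypeface(out glyphTypeface))
{
    // Step 3: map a character to its glyph index
    ushort glyphIndex = glyphTypeface.CharacterToGlyphMap['.'];

    // Advance width is in em units; multiply by the font size for device-independent units
    double advanceWidth = glyphTypeface.AdvanceWidths[glyphIndex];
}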
Relevant resources on using them include
The Thing about Glyphs
GlyphRun and So Forth
Measuring Text
Glyphs (particularly the picture at the bottom)
Further references on GDI (what these classes use under the hood is GDI or GDI+) and fonts in Windows include
GDI
Windows Font Mapping
Here's a solution involving WPF. We create an intermediate Geometry object in order to retrieve the accurate bounding box of our text. The advantage of this solution is that it does not actually render anything. Even if you don't use WPF for your interface, you can use this piece of code for measurement only, assuming the font rendering size is the same as in GDI, or close enough.
var fontFamily = new FontFamily("Arial");
var typeface = new Typeface(fontFamily, FontStyles.Normal, FontWeights.Normal, FontStretches.Normal);
var fontSize = 15;
var formattedText = new FormattedText(
    "Hello World",
    CultureInfo.CurrentCulture,
    FlowDirection.LeftToRight,
    typeface,
    fontSize,
    Brushes.Black);
var textGeometry = formattedText.BuildGeometry(new Point(0, 0));
double x = textGeometry.Bounds.Left;
double y = textGeometry.Bounds.Top;
double width = textGeometry.Bounds.Width;
double height = textGeometry.Bounds.Height;
Here, "Hello world" measurements are about 77 x 11 units. A single dot gives 1.5 x 1.5.
As an alternative solution, still in WPF, you could use GlyphRun and ComputeInkBoundingBox(). It's a bit more complex and won't support automatic font substitution, though. It would look like this:
var glyphRun = new GlyphRun(glyphTypeFace, 0, false, fontSize,
glyphIndexList,
new Point(0, 0),
advanceWidths,
null, null, null, null,
null, null);
Rect glyphInkBox = glyphRun.ComputeInkBoundingBox();

image processing techniques - direct manipulation of destination image or virtualized?

I need to re-project a series of aerial images that have been referenced in a geographic coordinate system into a UTM projection. I had read that using GetPixel and SetPixel might be slow, and that I should set up a series of two-dimensional arrays for intermediate access and then flush the values to the destination image when I am done.
Is this how professionals normally do this sort of image processing?
Most image processing is feature detection, segmentation of a scene, fault finding, classification and tracking.
You might want to take a peek at these books:
Image Processing in C (applicable to other languages too)
Image Processing - Principles and Applications
They describe many fast and effective image transformations, and both helped me when I was processing images :)
If I understand your question: if you are re-aligning or assembling many images and you don't have orientation as well as position, you can use these algorithms to re-align on edges and common features. If you are stitching by position, then these algorithms will help in re-sampling/resizing your images for more efficient assembly. There are also some open-source libraries for these kinds of things (OpenCV comes to mind).
edit: If I were re-projecting large images into new projections based on position conversion (and it were dynamic, not static), I would look into building an on-demand application that reprojects images given the required resolution and desired position. The application could then pull the nearest resolution of the neighbouring images and produce a result at the desired resolution.
Without more background, I hope this helps!
edit 2:
Comment from answer below:
Depends on the images. If they are fixed size then an array might be good. If they vary then it might be better to implement a system that provides get/setpixel using relative sampling/averaging to match up images of differing res?
I don't know the ins and outs of the images you are working with or what you are doing, but it is often useful to abstract what a 'pixel' is rather than accessing values in an array directly. That way you can implement conversion, sampling, rotation and correction algorithms in the backend, behind something like GetVPixel() or SetVPixel(). This is especially useful when working with multiple images of differing resolution/format. For example:
SetVPixel(img1, coord1, GetVPixel(img2, coord2))
Obviously in an OOP/C# manner. img1 and img2 can differ in size, resolution, geography, alignment or anything else, provided your backend understands both.
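A minimal sketch of such an abstraction (all type and member names here are illustrative, not from any library):
// Hypothetical geographic coordinate; the backend maps it to source pixels
public struct GeoCoord
{
    public double Lat;
    public double Lon;
}

public interface IVirtualImage
{
    // Backend handles resampling, format conversion and projection
    Color GetVPixel(GeoCoord coord);
    void SetVPixel(GeoCoord coord, Color color);
}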
If you don't mind using unsafe code, you can wrap the Bitmap's BitmapData in an object that lets you get and set pixels efficiently. The code below is mostly taken from a Gaussian blur filter, with a couple of modifications of my own. It's not the most flexible code if your bitmap formats differ, but I hope it illustrates how to manipulate bitmaps more efficiently.
public unsafe class RawBitmap : IDisposable
{
    private BitmapData _bitmapData;
    private byte* _begin;

    public RawBitmap(Bitmap originBitmap)
    {
        OriginBitmap = originBitmap;
        // Lock as 24bpp RGB; the 3-bytes-per-pixel indexers below rely on this format
        _bitmapData = OriginBitmap.LockBits(new Rectangle(0, 0, OriginBitmap.Width, OriginBitmap.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
        _begin = (byte*)(void*)_bitmapData.Scan0;
    }

    #region IDisposable Members
    public void Dispose()
    {
        OriginBitmap.UnlockBits(_bitmapData);
    }
    #endregion

    public unsafe byte* Begin
    {
        get { return _begin; }
    }

    public unsafe byte* this[int x, int y]
    {
        get { return _begin + y * _bitmapData.Stride + x * 3; }
    }

    public unsafe byte* this[int x, int y, int offset]
    {
        get { return _begin + y * _bitmapData.Stride + x * 3 + offset; }
    }

    public unsafe void SetColor(int x, int y, Color color)
    {
        byte* p = this[x, y];
        p[0] = color.B;
        p[1] = color.G;
        p[2] = color.R;
    }

    public unsafe Color GetColor(int x, int y)
    {
        byte* p = this[x, y];
        return Color.FromArgb(p[2], p[1], p[0]); // bytes are stored B, G, R
    }

    public int Stride
    {
        get { return _bitmapData.Stride; }
    }

    public int Width
    {
        get { return _bitmapData.Width; }
    }

    public int Height
    {
        get { return _bitmapData.Height; }
    }

    public int GetOffset()
    {
        // padding bytes at the end of each scan line
        return _bitmapData.Stride - _bitmapData.Width * 3;
    }

    public Bitmap OriginBitmap { get; private set; }
}
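A usage sketch, inverting a bitmap in place (bitmap is assumed to be any Bitmap that can be locked as 24bpp, as the class above does):
using (var raw = new RawBitmap(bitmap))
{
    for (int y = 0; y < raw.Height; y++)
    {
        for (int x = 0; x < raw.Width; x++)
        {
            Color c = raw.GetColor(x, y);
            raw.SetColor(x, y, Color.FromArgb(255 - c.R, 255 - c.G, 255 - c.B));
        }
    }
}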
The FreeImage library is pretty fast and offers a Cut and Paste that might be useful. The distribution comes with a C# wrapper.
AFAIK the overhead of GetPixel/SetPixel is the method call itself; when accessing an array there is no call, hence less overhead.
You could start with GetPixel/SetPixel and replace those calls with direct data access later.
