How can I upload pixels from a simple byte array to an OpenGL texture?
I'm using glTexImage2D and all I get is a white rectangle instead of a pixelated texture. The 9th parameter (a 32-bit pointer to the pixel data) is, I believe, the problem. I tried lots of parameter types there (byte, ref byte, byte[], ref byte[], int and IntPtr plus Marshal, out byte, out byte[], byte*). glGetError() always returns GL_NO_ERROR. There must be something I'm doing wrong, because I never get gibberish pixels - it's always white. glGenTextures works correctly: the first id has the value 1, as always in OpenGL, and I can draw colored lines without any problem. So something is wrong with my texturing. I'm in control of the DllImport, so I can change the parameter types if necessary.
GL.glBindTexture(GL.GL_TEXTURE_2D, id);
int w = 4;
int h = 4;
byte[] bytes = new byte[w * h * 4];
for (int i = 0; i < bytes.Length; i++)
    bytes[i] = (byte)Utils.random(256);
GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, w, h, 0, GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bytes);
[DllImport(GL_LIBRARY)]
public static extern void glTexImage2D(uint what, int level, int internalFormat, int width, int height,
    int border, int format, int type, byte[] bytes);
A common mistake is not changing the MIN filter: the default is mipmapped, which makes the texture incomplete. Do this:
GL.glBindTexture(GL.GL_TEXTURE_2D, id);
GL.glTexParameteri(GL.GL_TEXTURE_2D, GL.GL_TEXTURE_MIN_FILTER, GL.GL_NEAREST);
Then draw the texture.
A texture remaining white even though something has been uploaded is an indicator of either not all mipmap levels being uploaded properly, or the filter settings not being set correctly.
Of interest here is
glTexParameteri(GL_TEXTURE_..., GL_TEXTURE_MIN_FILTER, GL_...);
with GL_NEAREST or GL_LINEAR to disable mipmapping (mipmapping is enabled by default).
The other important thing is to describe the structure of the data before uploading it, i.e. before calling glTexImage. For this you use glPixelStorei to set the GL_UNPACK_... parameters: alignment, row length, and so on. I refer you to the documentation.
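For a tightly packed byte array like the one in the question, the unpack alignment is the usual suspect. A minimal sketch, assuming the same GL wrapper class as above and that it exposes GL_UNPACK_ALIGNMENT (standard value 0x0CF5):
// Rows in the source array are tightly packed, so tell GL not to
// expect any padding at the end of each pixel row.
GL.glPixelStorei(GL.GL_UNPACK_ALIGNMENT, 1);
GL.glTexImage2D(GL.GL_TEXTURE_2D, 0, GL.GL_RGBA, w, h, 0,
    GL.GL_RGBA, GL.GL_UNSIGNED_BYTE, bytes);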
Your P/Invoke declaration is wrong.
The short answer is:
[System.Runtime.InteropServices.DllImport(Library, EntryPoint = "glTexImage2D", ExactSpelling = true)]
internal static extern void glTexImage2D(int target, int level, int internalformat, int width, int height,
    int border, int format, int type, IntPtr pixels);
This P/Invoke declaration is a safe one (it does not use pointers directly, but IntPtr).
The problem is .NET memory management. Managed memory blocks are not fixed at a particular address: the garbage collector (GC) is free to move them around, for example when compacting the heap.
What you need is to tell the .NET GC that the memory must not be moved. To do so, use the fixed statement or another GC facility such as a pinned GCHandle.
For example:
public static void TexImage2D(int target, int level, int internalformat, Int32 width, Int32 height, int border, int format, int type, object pixels)
{
    // Pin the managed array so the GC cannot move it during the native call.
    GCHandle pp_pixels = GCHandle.Alloc(pixels, GCHandleType.Pinned);
    try
    {
        if (Delegates.pglTexImage2D != null)
            Delegates.pglTexImage2D(target, level, internalformat, width, height, border, format, type, pp_pixels.AddrOfPinnedObject());
        else
            throw new InvalidOperationException("binding point TexImage2D cannot be found");
    }
    finally
    {
        pp_pixels.Free(); // always unpin, even on failure
    }
}
The object parameter of the TexImage2D function is meant to be used with any array of data (any object deriving from the Array class: byte[], short[], int[], and so on).
Essentially the code above tells the GC: take the address of pixels and don't move it until I call Free() on the memory handle.
Using the fixed statement is another option, but it requires an unsafe P/Invoke declaration and is a little more verbose (each call needs its own unsafe and fixed blocks).
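For illustration, a sketch of the fixed variant; the declaration below is hypothetical (note the byte* parameter and the unsafe keyword):
[DllImport(Library, EntryPoint = "glTexImage2D", ExactSpelling = true)]
internal static extern unsafe void glTexImage2D(int target, int level, int internalformat,
    int width, int height, int border, int format, int type, byte* pixels);

public static unsafe void TexImage2D(int target, int level, int internalformat,
    int width, int height, int border, int format, int type, byte[] pixels)
{
    // fixed pins the array only for the duration of the block
    fixed (byte* p = pixels)
    {
        glTexImage2D(target, level, internalformat, width, height, border, format, type, p);
    }
}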
Related
I am porting an application from C# (WinForms) to C++ and noticed that drawing an image using GDI+ is much slower in C++, even though it uses the same API.
The image is loaded at application startup into a System.Drawing.Image or Gdiplus::Image, respectively.
The C# drawing code is (directly in the main form):
public Form1()
{
    this.SetStyle(ControlStyles.UserPaint | ControlStyles.AllPaintingInWmPaint | ControlStyles.OptimizedDoubleBuffer, true);
    this.image = Image.FromFile(...);
}

private readonly Image image;

protected override void OnPaint(PaintEventArgs e)
{
    base.OnPaint(e);
    var sw = Stopwatch.StartNew();
    e.Graphics.TranslateTransform(this.translation.X, this.translation.Y); /* NOTE0 */
    e.Graphics.DrawImage(this.image, 0, 0, this.image.Width, this.image.Height);
    Debug.WriteLine(sw.Elapsed.TotalMilliseconds.ToString()); // ~3ms
}
Regarding SetStyle: AFAIK, these flags (1) make WndProc ignore WM_ERASEBKGND, and (2) allocate a temporary HDC and Graphics for double buffered drawing.
The C++ drawing code is more bloated.
I have browsed the reference source of System.Windows.Forms.Control to see how it handles HDC and how it implements double buffering.
As far as I can tell, my implementation matches that closely (see NOTE1) (note that I implemented it in C++ first and then looked at how it's in the .NET source -- I may have overlooked things).
The rest of the program is more or less what you get when you create a fresh Win32 project in VS2019. All error handling omitted for readability.
// In wWinMain:
Gdiplus::GdiplusStartupInput gdiplusStartupInput;
Gdiplus::GdiplusStartup(&gdiplusToken, &gdiplusStartupInput, NULL);
gdip_bitmap = Gdiplus::Image::FromFile(...);

// In the WndProc callback:
case WM_PAINT:
    // Need this for the back buffer bitmap
    RECT client_rect;
    GetClientRect(hWnd, &client_rect);
    int client_width = client_rect.right - client_rect.left;
    int client_height = client_rect.bottom - client_rect.top;

    // Double buffering
    HDC hdc0 = BeginPaint(hWnd, &ps);
    HDC hdc = CreateCompatibleDC(hdc0);
    HBITMAP back_buffer = CreateCompatibleBitmap(hdc0, client_width, client_height); /* NOTE1 */
    HBITMAP dummy_buffer = (HBITMAP)SelectObject(hdc, back_buffer);

    // Create GDI+ stuff on top of HDC
    Gdiplus::Graphics *graphics = Gdiplus::Graphics::FromHDC(hdc);

    QueryPerformanceCounter(...);
    graphics->DrawImage(gdip_bitmap, 0, 0, bitmap_width, bitmap_height);
    /* print performance counter diff */ // -> ~27 ms typically
    delete graphics;

    // Double buffering
    BitBlt(hdc0, 0, 0, client_width, client_height, hdc, 0, 0, SRCCOPY);
    SelectObject(hdc, dummy_buffer);
    DeleteObject(back_buffer);
    DeleteDC(hdc); // This is the temporary double buffer HDC
    EndPaint(hWnd, &ps);
/* NOTE1 */: In the .NET source code they don't use CreateCompatibleBitmap, but CreateDIBSection instead.
That improves performance from 27 ms to 21 ms and is very cumbersome (see below).
In both cases I am calling Control.Invalidate or InvalidateRect, respectively, when the mouse moves (OnMouseMove, WM_MOUSEMOVE). The goal is to implement panning with the mouse using SetTransform - that's irrelevant for now as long as draw performance is bad.
NOTE2: https://stackoverflow.com/a/1617930/653473
This answer suggests that using Gdiplus::CachedBitmap is the trick. However, I can find no evidence in the C# WinForms source code that it makes use of cached bitmaps in any way - the C# code calls GdipDrawImageRectI, which maps to Graphics::DrawImage(IN Image* image, IN INT x, IN INT y, IN INT width, IN INT height).
Regarding /* NOTE1 */, here is the replacement for CreateCompatibleBitmap (just substitute CreateVeryCompatibleBitmap):
bool bFillBitmapInfo(HDC hdc, BITMAPINFO *pbmi)
{
    HBITMAP hbm = NULL;
    bool bRet = false;

    // Create a dummy bitmap from which we can query color format info about the device surface.
    hbm = CreateCompatibleBitmap(hdc, 1, 1);
    pbmi->bmiHeader.biSize = sizeof(BITMAPINFOHEADER);

    // Call first time to fill in BITMAPINFO header.
    GetDIBits(hdc, hbm, 0, 0, NULL, pbmi, DIB_RGB_COLORS);

    if ( pbmi->bmiHeader.biBitCount <= 8 ) {
        // UNSUPPORTED
    } else {
        if ( pbmi->bmiHeader.biCompression == BI_BITFIELDS ) {
            // Call a second time to get the color masks.
            // It's a GetDIBits Win32 "feature".
            GetDIBits(hdc, hbm, 0, pbmi->bmiHeader.biHeight, NULL, pbmi, DIB_RGB_COLORS);
        }
        bRet = true;
    }

    if (hbm != NULL) {
        DeleteObject(hbm);
        hbm = NULL;
    }

    return bRet;
}

HBITMAP CreateVeryCompatibleBitmap(HDC hdc, int width, int height)
{
    BITMAPINFO *pbmi = (BITMAPINFO *)LocalAlloc(LMEM_ZEROINIT, 4096); // Because otherwise I would have to figure out the actual size of the color table at the end; whatever...
    bFillBitmapInfo(hdc, pbmi);
    pbmi->bmiHeader.biWidth = width;
    pbmi->bmiHeader.biHeight = height;
    if (pbmi->bmiHeader.biCompression == BI_RGB) {
        pbmi->bmiHeader.biSizeImage = 0;
    } else {
        if ( pbmi->bmiHeader.biBitCount == 16 )
            pbmi->bmiHeader.biSizeImage = width * height * 2;
        else if ( pbmi->bmiHeader.biBitCount == 32 )
            pbmi->bmiHeader.biSizeImage = width * height * 4;
        else
            pbmi->bmiHeader.biSizeImage = 0;
    }
    pbmi->bmiHeader.biClrUsed = 0;
    pbmi->bmiHeader.biClrImportant = 0;

    void *dummy;
    HBITMAP back_buffer = CreateDIBSection(hdc, pbmi, DIB_RGB_COLORS, &dummy, NULL, 0);
    LocalFree(pbmi);
    return back_buffer;
}
Using a very compatible bitmap as the back buffer improves performance from 27 ms to 21 ms.
Regarding /* NOTE0 */ in the C# code -- the code is only fast if the transformation matrix doesn't scale. C# performance drops slightly when upscaling (~9ms), and drops significantly (~22ms) when downsampling.
This suggests that DrawImage wants to use a plain BitBlt when it can. But it can't in my C++ case, because the format of the bitmap loaded from disk differs from the back buffer format, or something along those lines.
If I create a new, more compatible bitmap (this time with no clear difference between CreateCompatibleBitmap and CreateVeryCompatibleBitmap), draw the original bitmap onto it, and then use only that more compatible bitmap in the DrawImage call, performance increases to about 4.5 ms. It also now has the same performance characteristics when scaling as the C# code.
if (better_bitmap == NULL)
{
    HBITMAP tmp_bitmap = CreateVeryCompatibleBitmap(hdc0, gdip_bitmap->GetWidth(), gdip_bitmap->GetHeight());
    HDC copy_hdc = CreateCompatibleDC(hdc0);
    HGDIOBJ old = SelectObject(copy_hdc, tmp_bitmap);
    Gdiplus::Graphics *copy_graphics = Gdiplus::Graphics::FromHDC(copy_hdc);
    copy_graphics->DrawImage(gdip_bitmap, 0, 0, gdip_bitmap->GetWidth(), gdip_bitmap->GetHeight());
    // Now tmp_bitmap contains the image, hopefully in the device's preferred format
    delete copy_graphics;
    SelectObject(copy_hdc, old);
    DeleteDC(copy_hdc);
    better_bitmap = Gdiplus::Bitmap::FromHBITMAP(tmp_bitmap, NULL);
}
BUT it's still consistently slower, so there must be something missing. And it raises a new question: why is this not necessary in C# (same image, same machine)? Image.FromFile does not convert the bitmap format on loading, as far as I can tell.
Why is the DrawImage call in the C++ code still slower, and what do I need to do to make it as fast as in C#?
I ended up replicating more of the .NET code insanity.
The magic call that makes it go fast is GdipImageForceValidation in System.Drawing.Image.FromFile. This function is basically not documented at all, and it is not even [officially] callable from C++. It is merely mentioned here: https://learn.microsoft.com/en-us/windows/win32/gdiplus/-gdiplus-image-flat
Gdiplus::Image::FromFile and GdipLoadImageFromFile don't actually load the full image into memory. It effectively gets copied from the disk every time it is being drawn. GdipImageForceValidation forces the image to be loaded into memory, or so it seems...
My initial idea of copying the image into a more compatible bitmap was on the right track, but the way I did it does not yield the best performance for GDI+ (because I used a GDI bitmap from the original HDC). Loading the image directly into a new GDI+ bitmap, regardless of pixel format, yields the same performance characteristics as seen in the C# implementation:
better_bitmap = new Gdiplus::Bitmap(gdip_bitmap->GetWidth(), gdip_bitmap->GetHeight(), PixelFormat24bppRGB);
Gdiplus::Graphics *graphics = Gdiplus::Graphics::FromImage(better_bitmap);
graphics->DrawImage(gdip_bitmap, 0, 0, gdip_bitmap->GetWidth(), gdip_bitmap->GetHeight());
delete graphics;
Even better yet, using PixelFormat32bppPARGB further improves performance substantially - the premultiplied alpha pays off when the image is repeatedly drawn (regardless of whether the source image has an alpha channel).
It seems calling GdipImageForceValidation effectively does something similar internally, although I don't know what it really does. Because Microsoft made it as impossible as they could to call the GDI+ flat API from C++ user code, I just modified Gdiplus::Image in my Windows SDK headers to include an appropriate method. Copying the bitmap explicitly to PARGB seems cleaner to me (and yields better performance).
Of course, once one finds out which undocumented function to use, Google also turns up some additional information: https://photosauce.net/blog/post/image-scaling-with-gdi-part-5-push-vs-pull-and-image-validation
GDI+ is not my favorite API.
This code gets different scaling depending on which computer I run it on.
Metafile image;
IntPtr dib;
var memoryHdc = Win32Utils.CreateMemoryHdc(IntPtr.Zero, 1, 1, out dib);
try
{
    image = new Metafile(memoryHdc, EmfType.EmfOnly);
    using (var g = Graphics.FromImage(image))
    {
        Render(g, html, left, top, maxWidth, cssData, stylesheetLoad, imageLoad);
    }
}
finally
{
    Win32Utils.ReleaseMemoryHdc(memoryHdc, dib);
}
Going into the Render method, the Metafile object has a PixelFormat of DontCare and consequently does not have valid vertical or horizontal resolutions.
Coming out of the Render method, it has a value of Format32bppRgb and PhysicalDimension.Width and PhysicalDimension.Height have increased to accommodate the rendered image.
How can I make scaling independent of local settings?
Here's the implementation of CreateMemoryHdc (I didn't write it, it's from an OSS library).
public static IntPtr CreateMemoryHdc(IntPtr hdc, int width, int height, out IntPtr dib)
{
    // Create a memory DC so we can work off-screen
    IntPtr memoryHdc = CreateCompatibleDC(hdc);
    SetBkMode(memoryHdc, 1);

    // Create a device-independent bitmap and select it into our DC
    var info = new BitMapInfo();
    info.biSize = Marshal.SizeOf(info);
    info.biWidth = width;
    info.biHeight = -height;
    info.biPlanes = 1;
    info.biBitCount = 32;
    info.biCompression = 0; // BI_RGB
    IntPtr ppvBits;
    dib = CreateDIBSection(hdc, ref info, 0, out ppvBits, IntPtr.Zero, 0);
    SelectObject(memoryHdc, dib);

    return memoryHdc;
}
As you can see, the width, height and bit depth passed to the DC constructor are constant. Creating the metafile produces different physical dimensions. Right after executing this
image = new Metafile(memoryHdc, EmfType.EmfOnly);
the metafile has PhysicalDimension.Height (and width) of 26.43 on my workstation and 31.25 on the server to which I am deploying, so the difference in scaling is already evident and therefore probably not a consequence of anything in the rendering.
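For reference: Image.PhysicalDimension is expressed in hundredths of a millimeter for metafiles, so 26.43 is almost exactly one pixel at 96 DPI (2540 / 96 ≈ 26.46), while 31.25 corresponds to 2540 / 31.25 = 81.28 DPI. If I read that right, the metafile inherits the resolution reported by the reference DC, which differs between the two machines.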
This may be relevant. BitMapInfo is defined in the OSS library and looks like this:
internal struct BitMapInfo
{
    public int biSize;
    public int biWidth;
    public int biHeight;
    public short biPlanes;
    public short biBitCount;
    public int biCompression;
    public int biSizeImage;
    public int biXPelsPerMeter;
    public int biYPelsPerMeter;
    public int biClrUsed;
    public int biClrImportant;
    public byte bmiColors_rgbBlue;
    public byte bmiColors_rgbGreen;
    public byte bmiColors_rgbRed;
    public byte bmiColors_rgbReserved;
}
so possibly setting biXPelsPerMeter and biYPelsPerMeter will help. The above code doesn't set them, so they may be picking up platform-dependent defaults.
Unfortunately, setting these values doesn't seem to make any difference. MSDN says
biXPelsPerMeter
The horizontal resolution, in pixels-per-meter, of the
target device for the bitmap. An application can use this value to
select a bitmap from a resource group that best matches the
characteristics of the current device.
So these settings are used when loading a bitmap from a resource. No help here.
This all looks pertinent: https://www.codeproject.com/Articles/177394/Working-with-Metafile-Images-in-NET
It may help to know that this code does not run in an application. It renders HTML as a metafile for printing, and it lives inside a Web API webservice.
There is no user interface, so I'm not sure how to interpret the question of whether it is DPI aware. The evidence suggests it is DPI affected, so the question may be pertinent.
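One thing that can be checked directly is what resolution the reference DC reports; a minimal sketch using GetDeviceCaps (LOGPIXELSX/LOGPIXELSY are the standard GDI index values):
using System;
using System.Runtime.InteropServices;

internal static class DpiProbe
{
    private const int LOGPIXELSX = 88;
    private const int LOGPIXELSY = 90;

    [DllImport("gdi32.dll")]
    private static extern int GetDeviceCaps(IntPtr hdc, int index);

    // The Metafile constructor derives its physical dimensions from the
    // resolution of the reference DC, so log what that DC reports.
    public static void Report(IntPtr hdc)
    {
        Console.WriteLine("DC reports {0}x{1} DPI",
            GetDeviceCaps(hdc, LOGPIXELSX),
            GetDeviceCaps(hdc, LOGPIXELSY));
    }
}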
GDI doesn't scale. Use GDI+ for device independence. You will lose antialiasing but most print devices are high DPI anyway.
Does the library in use have an option to use GDI+ instead?
(In my own case, yes. Problem solved.)
I have a C++ DLL with unmanaged code and a C# UI. There's a function imported from C++ DLL that takes a written-by-me struct as parameter.
After marshalling my struct (MyImage) from C# to C++ I can access the content of the int[] array inside it, but the content is different. I don't know what I'm missing here; I've spent quite some time on it and tried a few tricks to resolve this (obviously not enough).
MyImage struct in C#:
[StructLayout(LayoutKind.Sequential)]
struct MyImage
{
    public int width;
    public int height;
    public int[] bits; // these represent colors of the image - 4 bytes for each pixel
}
MyImage struct in C++:
struct MyImage
{
    int width;
    int height;
    Color* bits; // typedef unsigned int Color;

    MyImage(int w, int h) : width(w), height(h) // width/height must be initialized
    {
        bits = new Color[w * h];
    }

    Color GetPixel(int x, int y)
    {
        if (x < 0 || y < 0 || x >= width || y >= height)
            return UNDEFINED_COLOR;
        return bits[y * width + x];
    }
};
C# function declaration with MyImage as parameter:
[DllImport("G_DLL.dll")]
public static extern void DisplayImageInPolygon(Point[] p, int n, MyImage texture,
    int tex_x0, int tex_y0);
C++ implementation
DLLEXPORT void __stdcall DisplayImageInPolygon(Point *p, int n, MyImage img,
    int imgx0, int imgy0)
{
    // And below they have improper values (I don't know where they come from)
    Color test1 = img.GetPixel(0, 0);
    Color test2 = img.GetPixel(1, 0);
}
So when debugging the problem I noticed that the MyImage.bits array in the C++ struct holds different data.
How can I fix it?
Since the bits field is a pointer to memory allocated in the native code, you are going to need to declare it as IntPtr in the C# code.
struct MyImage
{
    public int width;
    public int height;
    public IntPtr bits;
}
If you want to access individual pixels in the C# code you'll need to write a GetPixel method, just as you did in the C++ code.
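A minimal sketch of such an accessor, assuming 4 bytes per pixel and that the native side owns the buffer:
using System;
using System.Runtime.InteropServices;

static class MyImageAccess
{
    public static uint GetPixel(MyImage img, int x, int y)
    {
        if (x < 0 || y < 0 || x >= img.width || y >= img.height)
            throw new ArgumentOutOfRangeException();
        // Read 4 bytes straight out of the native buffer.
        return (uint)Marshal.ReadInt32(img.bits, (y * img.width + x) * 4);
    }
}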
Note that since the bits field is a pointer to memory allocated in the native code, I'd expect the actual code to have a destructor for the struct that calls delete[] bits. Otherwise your code will leak.
This also means that you are going to need to create and destroy instances in the native code, and never do so in the managed code. Is this the policy you currently follow? I suspect not based on the code that I can see here.
You also need to reconsider passing the struct by value. Do you really want to take a copy of it when you call that function? Doing so means you've got two instances of the struct whose bits fields both point to the same memory. But, which one owns that memory? This structure really needs to be passed by reference.
I think you've got some problems in your design, but I can't see enough of the code, or know enough about your problem to be able to give you concrete advice.
In comments you state that your main goal is to transfer these bits from your C# code to the C++ code. I suggest you do it like this:
MyImage* NewImage(int w, int h, Color* bits)
{
    MyImage* img = new MyImage;
    img->width = w;  // note: the struct's fields are width/height
    img->height = h;
    img->bits = new Color[w*h];
    for (int i = 0; i < w*h; i++)
        img->bits[i] = bits[i];
    return img;
}

void DeleteImage(MyImage* img)
{
    delete[] img->bits;
    delete img;
}

void DoSomethingWithImage(MyImage* img)
{
    // do whatever it is you need to do
}
On the C# side you can declare it like this:
[DllImport(@"dllname.dll", CallingConvention = CallingConvention.Cdecl)]
static extern IntPtr NewImage(int w, int h, int[] bits);

[DllImport(@"dllname.dll", CallingConvention = CallingConvention.Cdecl)]
static extern void DeleteImage(IntPtr img);

[DllImport(@"dllname.dll", CallingConvention = CallingConvention.Cdecl)]
static extern void DoSomethingWithImage(IntPtr img);
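Usage from the C# side would then look something like this (a sketch; the native side owns the memory from NewImage until DeleteImage):
static void Main()
{
    int w = 4, h = 4;
    int[] bits = new int[w * h]; // fill with pixel data as needed
    IntPtr img = NewImage(w, h, bits); // native code copies the array
    try
    {
        DoSomethingWithImage(img);
    }
    finally
    {
        DeleteImage(img); // release on the native side, never in C#
    }
}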
The first thing you should try is declaring your C# struct with unsigned int types as well. It is possible that one bit is being interpreted as a sign bit for your int.
So in C#, something like this (just note that bits is now uint[]):
[StructLayout(LayoutKind.Sequential)]
struct MyImage
{
    public int width;
    public int height;
    public uint[] bits; // these represent colors of the image - 4 bytes for each pixel
}
You can use the PInvoke Interop Assistant. You simply paste your struct and function declaration and it will generate the C# code for you. It has helped me a lot quite a few times.
I have searched a lot and tried many things, but I cannot find a proper solution.
Is there any approach for determining the exact glyph height in a specified font?
What I mean: when I determine the height of the DOT glyph, I should receive the small actual height, not a height including padding, and not the font size.
I have found a solution for determining the exact glyph width here (I used the second approach), but it does not work for the height.
UPDATE: I need a solution for .NET 1.1
It's not that hard to get the character metrics. GDI contains a function GetGlyphOutline that you can call with the GGO_METRICS constant to get the height and width of the smallest enclosing rectangle required to contain the glyph when rendered. I.e., a 10 point glyph for a dot in the font Arial will give a rectangle of 1x1 pixels, and for the letter I about 95x14 if the font is 100 points in size.
These are the declarations for the P/Invoke calls:
// the declarations
public struct FIXED
{
    public short fract;
    public short value;
}

public struct MAT2
{
    [MarshalAs(UnmanagedType.Struct)] public FIXED eM11;
    [MarshalAs(UnmanagedType.Struct)] public FIXED eM12;
    [MarshalAs(UnmanagedType.Struct)] public FIXED eM21;
    [MarshalAs(UnmanagedType.Struct)] public FIXED eM22;
}

[StructLayout(LayoutKind.Sequential)]
public struct POINT
{
    public int x;
    public int y;
}

// used with the outline formats of GetGlyphOutline; note it is NOT
// part of GLYPHMETRICS (adding it there would shift gmCellIncX/Y)
[StructLayout(LayoutKind.Sequential)]
public struct POINTFX
{
    [MarshalAs(UnmanagedType.Struct)] public FIXED x;
    [MarshalAs(UnmanagedType.Struct)] public FIXED y;
}

[StructLayout(LayoutKind.Sequential)]
public struct GLYPHMETRICS
{
    public int gmBlackBoxX;
    public int gmBlackBoxY;
    [MarshalAs(UnmanagedType.Struct)] public POINT gmptGlyphOrigin;
    public short gmCellIncX;
    public short gmCellIncY;
}

private const int GGO_METRICS = 0;
private const uint GDI_ERROR = 0xFFFFFFFF;

[DllImport("gdi32.dll")]
static extern uint GetGlyphOutline(IntPtr hdc, uint uChar, uint uFormat,
    out GLYPHMETRICS lpgm, uint cbBuffer, IntPtr lpvBuffer, ref MAT2 lpmat2);

[DllImport("gdi32.dll", ExactSpelling = true, PreserveSig = true, SetLastError = true)]
static extern IntPtr SelectObject(IntPtr hdc, IntPtr hgdiobj);
The actual code is rather trivial, if you don't mind the P/Invoke redundancies. I tested the code, it works (you can adjust it to get the width as well from GLYPHMETRICS).
Note: this is ad-hoc code; in the real world you should clean up the HDCs and GDI objects with ReleaseHdc and DeleteObject. Thanks to a comment by user2173353 for pointing this out.
// if you want exact metrics, use a high font size and divide the result
// otherwise, the resulting rectangle is rounded to nearest int
private int GetGlyphHeight(char letter, string fontName, float fontPointSize)
{
    // init the font. Probably better to do this outside this function for performance
    Font font = new Font(new FontFamily(fontName), fontPointSize);
    GLYPHMETRICS metrics;

    // identity matrix, required
    MAT2 matrix = new MAT2
    {
        eM11 = { value = 1 },
        eM12 = { value = 0 },
        eM21 = { value = 0 },
        eM22 = { value = 1 }
    };

    // HDC needed, we use a bitmap
    using (Bitmap b = new Bitmap(1, 1))
    using (Graphics g = Graphics.FromImage(b))
    {
        IntPtr hdc = g.GetHdc();
        IntPtr prev = SelectObject(hdc, font.ToHfont());
        uint retVal = GetGlyphOutline(
            /* handle to DC */   hdc,
            /* the char/glyph */ letter,
            /* format param */   GGO_METRICS,
            /* glyph-metrics */  out metrics,
            /* buffer, ignore */ 0,
            /* buffer, ignore */ IntPtr.Zero,
            /* trans-matrix */   ref matrix);

        if (retVal == GDI_ERROR)
        {
            // something went wrong. Raise your own error here,
            // or just silently ignore
            return 0;
        }

        // return the height of the smallest rectangle containing the glyph
        return metrics.gmBlackBoxY;
    }
}
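Usage, following the comment at the top of the method about measuring at a large size for sub-integer precision (a sketch):
// measure the dot at 1000 points, then scale back to the target size
int blackBox = GetGlyphHeight('.', "Arial", 1000f);
float heightAt10pt = blackBox * 10f / 1000f;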
Can you update the question to include what you have tried?
By dot glyph, I assume you mean the punctuation mark detailed here?
Is this glyph height displayed on screen or on a printed page?
I managed to modify the first method in the link you posted to count the matching vertical pixels, but identifying the largest height of the glyph is fiddly unless you are willing to draw character by character, so this wasn't really a general working solution like the one in the article.
A general working solution would need to identify the largest single-pixel vertical region of the character/glyph, then count the number of pixels in that region.
I also managed to verify that Graphics.MeasureString, TextRenderer.MeasureText and Graphics.MeasureCharacterRanges all returned the bounding box which gave a number similar to the font height.
The alternative to this is the Glyph.ActualHeight property, which gets the rendered height of the framework element. This is part of WPF, along with the related GlyphTypeface and GlyphRun classes. I wasn't able to test them at this time, having only Mono.
The steps for getting Glyph.ActualHeight are as follows:
1. Initialise the arguments for GlyphRun
2. Initialise the GlyphRun object
3. Access the relevant Glyph using glyphTypeface.CharacterToGlyphMap[text[n]] or, more correctly, glyphTypeface.GlyphIndices[n], where glyphTypeface is your GlyphTypeface, created from the Typeface object you make in step 1.
Relevant resources on using them include
The Thing about Glyphs
GlyphRun and So Forth
Measuring Text
Glyphs (particularly the picture at the bottom)
Further references on GDI (these classes use GDI or GDI+ under the hood) and fonts in Windows include
GDI
Windows Font Mapping
Here's a solution involving WPF. We create an intermediate Geometry object in order to retrieve the accurate bounding box of our text. The advantage of this solution is that it does not actually render anything. Even if you don't use WPF for your interface, you may use this piece of code to do your measurements only, assuming the font rendering size would be the same in GDI, or close enough.
var fontFamily = new FontFamily("Arial");
var typeface = new Typeface(fontFamily, FontStyles.Normal, FontWeights.Normal, FontStretches.Normal);
var fontSize = 15;
var formattedText = new FormattedText(
    "Hello World",
    CultureInfo.CurrentCulture,
    FlowDirection.LeftToRight,
    typeface,
    fontSize,
    Brushes.Black);

var textGeometry = formattedText.BuildGeometry(new Point(0, 0));

double x = textGeometry.Bounds.Left;
double y = textGeometry.Bounds.Top; // note: Top, not Right
double width = textGeometry.Bounds.Width;
double height = textGeometry.Bounds.Height;
Here, "Hello world" measurements are about 77 x 11 units. A single dot gives 1.5 x 1.5.
As an alternative solution, still in WPF, you could use GlyphRun and ComputeInkBoundingBox(). It's a bit more complex and won't support automatic font substitution, though. It would look like this:
var glyphRun = new GlyphRun(glyphTypeFace, 0, false, fontSize,
    glyphIndexList,
    new Point(0, 0),
    advanceWidths,
    null, null, null, null,
    null, null);

Rect glyphInkBox = glyphRun.ComputeInkBoundingBox();
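The setup that snippet glosses over (typeface, glyph indices, advance widths) would look roughly like this; note TryGetGlyphTypeface can fail for fonts that need substitution, which is the limitation mentioned above:
var typeface = new Typeface(new FontFamily("Arial"),
    FontStyles.Normal, FontWeights.Normal, FontStretches.Normal);

GlyphTypeface glyphTypeFace;
if (!typeface.TryGetGlyphTypeface(out glyphTypeFace))
    throw new InvalidOperationException("font does not expose a GlyphTypeface");

double fontSize = 15;
string text = "Hello World";
var glyphIndexList = new ushort[text.Length];
var advanceWidths = new double[text.Length];
for (int i = 0; i < text.Length; i++)
{
    ushort index = glyphTypeFace.CharacterToGlyphMap[text[i]];
    glyphIndexList[i] = index;
    // AdvanceWidths are in em units; scale by the font size
    advanceWidths[i] = glyphTypeFace.AdvanceWidths[index] * fontSize;
}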
I need to re-project a series of aerial images that have been referenced in a geographic coordinate system into a UTM projection. I have read that using GetPixel and SetPixel might be slow, so should I set up a series of 2-dimensional arrays for intermediate access and then flush the values to the destination image when I am done?
Is this how this sort of image processing is normally done by professionals?
Most image processing is feature detection, segmentation of a scene, fault finding, classification and tracking ....
You might want to take a peek at the book:
Image Processing in C (applicable for other languages too)
Image Processing - Principles and Applications
Which describes many fast and effective means of many image transformations. These two books helped me when I was processing images :)
If I understand your question ... If you are re-aligning or assembling many images, and you don't have orientation as well as position, you can use these algorithms for re-alignment of edges and common features. If you are stitching by position then these algorithms will help in re-sampling/resizing your images for more efficient assembly. There are also some open source libraries for these kinds of things. (OpenCV comes to mind)
edit: If I were re-projecting large images into new projections based on position conversion (and it were dynamic, not static) I would look into building an on-demand application that will refactor images given required resolution and desired position. The application can then pull the nearest resolution of the relative neighbourhood images and provide a result at the desired resolution.
Without more background, I hope this helps!
edit 2:
Comment from answer below:
Depends on the images. If they are fixed size then an array might be good. If they vary then it might be better to implement a system that provides get/setpixel using relative sampling/averaging to match up images of differing res?
I don't know the ins and outs of the images you are working with or what you are doing, but it is often useful to abstract what a 'pixel' is rather than accessing values in an array directly. That way you can implement conversion, sampling, rotation, and correction algorithms in the backend, e.g. GetVPixel() or SetVPixel(). This is especially useful when working with multiple images of differing resolutions and formats. Like
SetVPixel(img1, coord1, GetVPixel(img2, coord2))
Obviously in an OOP/C# manner. img1 and img2 can differ in size, resolution, geography, alignment or anything else, provided your backend understands both.
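A sketch of what such an abstraction could look like; all names here are illustrative, not from any particular library:
// Each implementation knows its own storage, resolution and
// georeferencing; callers never touch raw pixel arrays.
interface IVirtualImage
{
    // Coordinates are in a shared reference system, not raw indices;
    // implementations may sample or average internally.
    uint GetVPixel(double x, double y);
    void SetVPixel(double x, double y, uint color);
}

static class VirtualImageOps
{
    // Copy a region between two images that may differ in
    // resolution, format or alignment.
    public static void CopyRegion(IVirtualImage src, IVirtualImage dst,
        double x0, double y0, double x1, double y1, double step)
    {
        for (double y = y0; y < y1; y += step)
            for (double x = x0; x < x1; x += step)
                dst.SetVPixel(x, y, src.GetVPixel(x, y));
    }
}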
If you don't mind using unsafe code, you can wrap the Bitmap's BitmapData in an object that allows you to efficiently get and set pixels. The below code is mostly taken from a gaussian blur filter, with a couple of modifications of my own. It's not the most flexible code if your bitmap formats differ but I hope it illustrates how you can manipulate bitmaps more efficiently.
public unsafe class RawBitmap : IDisposable
{
    private BitmapData _bitmapData;
    private byte* _begin;

    public RawBitmap(Bitmap originBitmap)
    {
        OriginBitmap = originBitmap;
        _bitmapData = OriginBitmap.LockBits(new Rectangle(0, 0, OriginBitmap.Width, OriginBitmap.Height), ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
        _begin = (byte*)(void*)_bitmapData.Scan0;
    }

    #region IDisposable Members

    public void Dispose()
    {
        OriginBitmap.UnlockBits(_bitmapData);
    }

    #endregion

    public unsafe byte* Begin
    {
        get { return _begin; }
    }

    public unsafe byte* this[int x, int y]
    {
        get { return _begin + y * _bitmapData.Stride + x * 3; }
    }

    public unsafe byte* this[int x, int y, int offset]
    {
        get { return _begin + y * _bitmapData.Stride + x * 3 + offset; }
    }

    public unsafe void SetColor(int x, int y, Color color)
    {
        byte* p = this[x, y];
        p[0] = color.B;
        p[1] = color.G;
        p[2] = color.R;
    }

    public unsafe Color GetColor(int x, int y)
    {
        byte* p = this[x, y];
        // Color has no public constructor; use the FromArgb factory
        return Color.FromArgb(p[2], p[1], p[0]);
    }

    public int Stride
    {
        get { return _bitmapData.Stride; }
    }

    public int Width
    {
        get { return _bitmapData.Width; }
    }

    public int Height
    {
        get { return _bitmapData.Height; }
    }

    public int GetOffset()
    {
        return _bitmapData.Stride - _bitmapData.Width * 3;
    }

    public Bitmap OriginBitmap { get; private set; }
}
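Typical usage would be something like the following sketch (file names are placeholders; remember the class assumes 24bpp, hence the hard-coded 3-byte pixel width):
using (var bmp = new Bitmap("input.png"))
{
    using (var raw = new RawBitmap(bmp)) // LockBits happens in the constructor
    {
        for (int y = 0; y < raw.Height; y++)
        {
            for (int x = 0; x < raw.Width; x++)
            {
                // invert every pixel as a trivial example transformation
                Color c = raw.GetColor(x, y);
                raw.SetColor(x, y, Color.FromArgb(255 - c.R, 255 - c.G, 255 - c.B));
            }
        }
    } // Dispose calls UnlockBits
    bmp.Save("output.png");
}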
The FreeImage library is pretty fast and offers a Cut and Paste that might be useful. The distribution comes with a C# wrapper.
AFAIK the overhead of GetPixel/SetPixel is in the call itself; when accessing an array there is no call, hence less overhead.
You should start with GetPixel/SetPixel; you can always replace those calls later with direct data access.