Why does a bitmap compare not equal to itself? - c#

This is a bit puzzling. The following code is part of a little testing application that verifies code changes didn't introduce a regression. To make it fast we used memcmp, which appears (unsurprisingly) to be the fastest way of comparing two images of equal size.
However, we have a few test images that exhibit a rather surprising problem: memcmp on the bitmap data tells us they are not equal, yet a pixel-by-pixel comparison finds no difference at all. I was under the impression that LockBits on a Bitmap gives you the actual raw bytes of the image. For a 24 bpp bitmap it's hard to imagine a condition where the pixels are the same but the underlying pixel data isn't.
A few surprising things:
The differences are always single bytes that are 00 in one image and FF in the other.
If one changes the PixelFormat for LockBits to Format32bppRgb or Format32bppArgb, the comparison succeeds.
If one passes the BitmapData returned by the first LockBits call as 4th argument to the second one, the comparison succeeds.
As noted above, the pixel-by-pixel comparison succeeds as well.
I'm a bit stumped here because frankly I cannot imagine why this happens.
(Reduced) code below. Just compile with csc /unsafe and pass a 24bpp PNG image as the first argument.
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

class Program
{
    public static void Main(string[] args)
    {
        Bitmap title = new Bitmap(args[0]);
        Console.WriteLine(CompareImageResult(title, new Bitmap(title)));
    }

    private static string CompareImageResult(Bitmap bmp, Bitmap expected)
    {
        string retval = "";
        unsafe
        {
            var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
            var resultData = bmp.LockBits(rect, ImageLockMode.ReadOnly, bmp.PixelFormat);
            var expectedData = expected.LockBits(rect, ImageLockMode.ReadOnly, expected.PixelFormat);
            try
            {
                if (memcmp(resultData.Scan0, expectedData.Scan0, resultData.Stride * resultData.Height) != 0)
                    retval += "Bitmap data did not match\n";
            }
            finally
            {
                bmp.UnlockBits(resultData);
                expected.UnlockBits(expectedData);
            }
        }
        for (var x = 0; x < bmp.Width; x++)
            for (var y = 0; y < bmp.Height; y++)
                if (bmp.GetPixel(x, y) != expected.GetPixel(x, y))
                {
                    Console.WriteLine("Pixel diff at {0}, {1}: {2} - {3}", x, y, bmp.GetPixel(x, y), expected.GetPixel(x, y));
                    retval += "pixel fail";
                }
        return retval != "" ? retval : "success";
    }

    [DllImport("msvcrt.dll", CallingConvention = CallingConvention.Cdecl)]
    static extern int memcmp(IntPtr b1, IntPtr b2, long count);
}

Take a look at the links below, which pictorially illustrate a LockBits buffer: they show the rows of pixel data and where padding can appear at the end of each stride (if it's needed).
https://web.archive.org/web/20141229164101/http://bobpowell.net/lockingbits.aspx
http://supercomputingblog.com/graphics/using-lockbits-in-gdi/
A stride is probably aligned to a 32-bit (4-byte) boundary for efficiency purposes, and the extra unused space at the end of the stride is there to make the next stride start aligned.
So that's what's giving you the random behaviour during the comparison: spurious data in the padding region.
When you use Format32bppRgb or Format32bppArgb, each row is naturally word-aligned, so I guess you don't have any extra unused bytes at the end, which is why the comparison works.

Just an educated guess:
24 bits (3 bytes) per pixel is a little awkward on 32/64-bit hardware.
With this format there are bound to be rows that are padded out to a multiple of 4 bytes, leaving 1 or more bytes as 'don't care'. They can contain random data and the software doesn't feel obliged to zero them out. This will make memcmp fail.
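
If stride padding is indeed the culprit, one way to keep the speed of memcmp while ignoring the padding is to compare the bitmaps row by row, limiting each comparison to the bytes that actually contain pixel data. A minimal sketch (my illustration, reusing the memcmp import from the program above and assuming both images are locked as 24bpp with identical dimensions):

private static unsafe bool BitmapDataEqual(BitmapData a, BitmapData b)
{
    int bytesPerRow = a.Width * 3; // 3 bytes per pixel at 24bpp, padding excluded
    byte* rowA = (byte*)a.Scan0;
    byte* rowB = (byte*)b.Scan0;
    for (int y = 0; y < a.Height; y++)
    {
        // Compare only the meaningful part of the row; the remaining
        // Stride - bytesPerRow bytes are alignment padding with undefined content.
        if (memcmp((IntPtr)rowA, (IntPtr)rowB, bytesPerRow) != 0)
            return false;
        rowA += a.Stride;
        rowB += b.Stride;
    }
    return true;
}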

Related

Performant method of drawing text onto a png file?

I need to draw a two-dimensional grid of squares with centered text on them onto a (transparent) PNG file.
The tiles need to have a sufficiently big resolution so that the text does not get pixelated too much.
For testing purposes I create a 2048x2048px 32-bit (transparency) PNG image with 128x128px tiles, like for example this one:
The problem is that I need to do this with reasonable performance. All methods I have tried so far took more than 100ms to complete, while I would need this to be at most < 10ms. Apart from that, the program generating these images needs to be cross-platform and support WebAssembly (but even if you have, for example, an idea of how to do this using POSIX threads, etc., I would gladly take that as a starting point, too).
Net5 Implementation
using System;
using System.Diagnostics;
using System.Drawing;

namespace ImageGeneratorBenchmark
{
    class Program
    {
        static int rowColCount = 16;
        static int tileSize = 128;

        static void Main(string[] args)
        {
            var watch = Stopwatch.StartNew();
            Bitmap bitmap = new Bitmap(rowColCount * tileSize, rowColCount * tileSize);
            Graphics graphics = Graphics.FromImage(bitmap);
            Brush[] usedBrushes = { Brushes.Blue, Brushes.Red, Brushes.Green, Brushes.Orange, Brushes.Yellow };
            int totalCount = rowColCount * rowColCount;
            Random random = new Random();
            StringFormat format = new StringFormat();
            format.LineAlignment = StringAlignment.Center;
            format.Alignment = StringAlignment.Center;
            for (int i = 0; i < totalCount; i++)
            {
                int x = i % rowColCount * tileSize;
                int y = i / rowColCount * tileSize;
                graphics.FillRectangle(usedBrushes[random.Next(0, usedBrushes.Length)], x, y, tileSize, tileSize);
                graphics.DrawString(i.ToString(), SystemFonts.DefaultFont, Brushes.Black, x + tileSize / 2, y + tileSize / 2, format);
            }
            bitmap.Save("Test.png");
            watch.Stop();
            Console.WriteLine($"Output took {watch.ElapsedMilliseconds} ms.");
        }
    }
}
This takes around 115ms on my machine. I am using the System.Drawing.Common NuGet package here.
Saving the bitmap takes roughly 55ms, and drawing to the graphics object in the loop takes roughly 60ms, of which about 40ms can be attributed to drawing the text.
Rust Implementation
use std::path::Path;
use std::time::Instant;

use image::{Rgba, RgbaImage};
use imageproc::{drawing::{draw_text_mut, draw_filled_rect_mut, text_size}, rect::Rect};
use rusttype::{Font, Scale};
use rand::Rng;

#[derive(Default)]
struct TextureAtlas {
    segment_size: u16,    // The side length of the tile
    row_col_count: u8,    // The number of tiles in horizontal and vertical direction
    current_segment: u32, // Points to the next segment that will be used
}

fn main() {
    let before = Instant::now();
    let mut atlas = TextureAtlas {
        segment_size: 128,
        row_col_count: 16,
        ..Default::default()
    };
    let path = Path::new("test.png");
    let colors = vec![
        Rgba([132u8, 132u8, 132u8, 255u8]),
        Rgba([132u8, 255u8, 32u8, 120u8]),
        Rgba([200u8, 255u8, 132u8, 255u8]),
        Rgba([255u8, 0u8, 0u8, 255u8]),
    ];
    let mut image = RgbaImage::new(2048, 2048);
    let font = Vec::from(include_bytes!("../assets/DejaVuSans.ttf") as &[u8]);
    let font = Font::try_from_vec(font).unwrap();
    let font_size = 40.0;
    let scale = Scale {
        x: font_size,
        y: font_size,
    };
    // Draw random color rects for benchmarking
    for i in 0..256 {
        let rand_num = rand::thread_rng().gen_range(0..colors.len());
        draw_filled_rect_mut(
            &mut image,
            Rect::at(
                (atlas.current_segment as i32 % atlas.row_col_count as i32) * atlas.segment_size as i32,
                (atlas.current_segment as i32 / atlas.row_col_count as i32) * atlas.segment_size as i32,
            )
            .of_size(atlas.segment_size.into(), atlas.segment_size.into()),
            colors[rand_num],
        );
        let number = i.to_string();
        //let text = &number[..];
        let text = number.as_str(); // Somehow this conversion takes ~15ms here for 255 iterations, whereas it should normally take less than 1µs
        let (w, h) = text_size(scale, &font, text);
        draw_text_mut(
            &mut image,
            Rgba([0u8, 0u8, 0u8, 255u8]),
            (atlas.current_segment % atlas.row_col_count as u32) * atlas.segment_size as u32 + atlas.segment_size as u32 / 2 - w as u32 / 2,
            (atlas.current_segment / atlas.row_col_count as u32) * atlas.segment_size as u32 + atlas.segment_size as u32 / 2 - h as u32 / 2,
            scale,
            &font,
            text,
        );
        atlas.current_segment += 1;
    }
    image.save(path).unwrap();
    println!("Output took {:?}", before.elapsed());
}
For Rust I was using the imageproc crate. Previously I used the piet-common crate, but the output took more than 300ms. With the imageproc crate I got around 110ms in release mode, which is on par with the C# version, but I think it will perform better with WebAssembly.
When I used a static string instead of converting the number from the loop (see comment), I got below 100ms execution time. For Rust, drawing to the image only takes around 30ms, but saving it takes 80ms.
C++ Implementation
#include <iostream>
#include <cstdlib>
#define cimg_display 0
#define cimg_use_png
#include "CImg.h"
#include <chrono>
#include <string>

using namespace cimg_library;
using namespace std;

/* Generate random numbers in an inclusive range. */
int random(int min, int max)
{
    static bool first = true;
    if (first)
    {
        srand(time(NULL));
        first = false;
    }
    return min + rand() % ((max + 1) - min);
}

int main() {
    auto t1 = std::chrono::high_resolution_clock::now();
    static int tile_size = 128;
    static int row_col_count = 16;
    // Create 2048x2048px image.
    CImg<unsigned char> image(tile_size * row_col_count, tile_size * row_col_count, 1, 3);
    // Make some colours.
    unsigned char cyan[] = { 0, 255, 255 };
    unsigned char black[] = { 0, 0, 0 };
    unsigned char yellow[] = { 255, 255, 0 };
    unsigned char red[] = { 255, 0, 0 };
    unsigned char green[] = { 0, 255, 0 };
    unsigned char orange[] = { 255, 165, 0 };
    unsigned char colors[] = { // This is terrible, but I don't know C++ very well.
        cyan[0], cyan[1], cyan[2],
        yellow[0], yellow[1], yellow[2],
        red[0], red[1], red[2],
        green[0], green[1], green[2],
        orange[0], orange[1], orange[2],
    };
    int total_count = row_col_count * row_col_count;
    for (int i = 0; i < total_count; i++)
    {
        int x = i % row_col_count * tile_size;
        int y = i / row_col_count * tile_size;
        int random_color_index = random(0, 4);
        unsigned char current_color[] = { colors[random_color_index * 3], colors[random_color_index * 3 + 1], colors[random_color_index * 3 + 2] };
        image.draw_rectangle(x, y, x + tile_size, y + tile_size, current_color, 1.0); // Force use of transparency. -> Does not work. Always outputs 24bit PNGs.
        auto s = std::to_string(i);
        CImg<unsigned char> imgtext;
        unsigned char color = 1;
        imgtext.draw_text(0, 0, s.c_str(), &color, 0, 1, 40); // Measure the text by drawing to an empty instance, so that the bounding box is set automatically.
        image.draw_text(x + tile_size / 2 - imgtext.width() / 2, y + tile_size / 2 - imgtext.height() / 2, s.c_str(), black, 0, 1, 40);
    }
    // Save result image as PNG (libpng and GraphicsMagick are required).
    image.save_png("Test.png");
    auto t2 = std::chrono::high_resolution_clock::now();
    auto duration = std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count();
    std::cout << "Output took " << duration << "ms.";
    getchar();
}
I also reimplemented the same program in C++ using CImg. For .png output, libpng and GraphicsMagick are required, too. I am not very fluent in C++ and did not even bother optimizing, because the save operation alone took ~200ms in Release mode, whereas the whole image generation, which is currently very unoptimized, took only 30ms. So this solution also falls way short of my goal.
Where I am right now
A graph of where I am right now. I will update this when I make some progress.
Why I am trying to do this and why it bothers me so much
I was asked in the comments to give a bit more context. I know this question is getting a bit bloated, but if you are interested, read on...
So basically I need to build a texture atlas for a .gltf file. I need to generate the .gltf file from data, and the primitives in the .gltf file will be assigned a texture based on the input data, too. In order to optimize for a small number of draw calls, I am putting as much geometry as possible into one single primitive and then use texture coordinates to map the texture to the model. Now GPUs have a maximum texture size; I will use 2048x2048 pixels, because the majority of devices support at least that. That means that if I have more than 256 objects, I need to add a new primitive to the .gltf and generate another texture atlas. In some cases one texture atlas might be sufficient; in other cases I need up to 15-20.
The textures will have a (semi-)transparent background, maybe text and maybe some lines / hatches or simple symbols, that can be drawn with a path.
I have the whole system set up in Rust already, and the .gltf generation is really efficient: I can generate 54000 vertices (= 1500 boxes, for example) in about 10ms, which is a common case. For this I need to generate 6 texture atlases, which is not really a problem on a multi-core system (7 threads: one for the .gltf, six for the textures). The problem is that generating one atlas takes about 100ms (or now 55ms), which makes the whole process more than 5 times slower.
Unfortunately it gets even worse, because another common case is 15000 objects. Generating the vertices (plus a lot of custom attributes, actually) and assembling the .gltf still only takes 96ms (540000 vertices / 20MB .gltf), but in that time I need to generate 59 texture atlases. I am working on an 8-core system, so at that point it becomes impossible for me to run them all in parallel and I have to generate ~9 atlases per thread (which means 55ms * 9 = 495ms), so again this is 5 times as much and creates quite a noticeable lag. In reality it currently takes more than 2.5s, because I haven't updated everything to use the faster code yet, and there seems to be additional slowdown.
What I need to do
I do understand that it will take some time to write out 4194304 32-bit pixels. But as far as I can see, because I am only writing to different parts of the image (for example only to the upper tile and so on) it should be possible to build a program that does this using multiple threads. That is what I would like to try and I would take any hint on how to make my Rust program run faster.
If it helps I would also be willing to rewrite this in C or any other language, that can be compiled to wasm and can be called via Rust's FFI. So if you have suggestions for more performant libraries I would be very thankful for that too.
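
Since each horizontal band of tiles occupies a disjoint range of bytes in the pixel buffer, a minimal C# sketch of such a multi-threaded fill could look like the following (my illustration, not from the post; it assumes bmpData is the BitmapData of a 32bppArgb bitmap locked for writing, count is the number of tiles per side, size is the tile side length in pixels, and UsedColors is an array of Color as in the answer further down):

// Requires System.Threading.Tasks and compiling with /unsafe.
IntPtr scan0 = bmpData.Scan0;
int stride = bmpData.Stride;
Parallel.For(0, count, tileRow =>
{
    // Random is not thread-safe, so give each task its own instance.
    var rng = new Random(unchecked(Environment.TickCount * 31 + tileRow));
    // Choose one color per tile in this band.
    var rowColors = new int[count];
    for (var t = 0; t < count; t++)
        rowColors[t] = UsedColors[rng.Next(UsedColors.Length)].ToArgb();
    unsafe
    {
        // Each task writes only the pixel rows of its own band of tiles,
        // so no synchronization is required.
        for (var y = tileRow * size; y < (tileRow + 1) * size; y++)
        {
            var row = (int*)((byte*)scan0 + (long)y * stride);
            for (var t = 0; t < count; t++)
                new Span<int>(row + t * size, size).Fill(rowColors[t]);
        }
    }
});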
Edit
Update 1: I made all the suggested improvements for the C# version from the comments. Thanks for all of them. It is now at 115ms and almost exactly as fast as the Rust version, which makes me believe I am sort of hitting a dead end there and would really need to find a way to parallelize this in order to make significant further improvements...
Update 2: Thanks to @pinkfloydx33 I was able to get the binary to run in around 60ms (including the first run) after publishing it with dotnet publish -p:PublishReadyToRun=true --runtime win10-x64 --configuration Release.
In the meantime I also tried other methods myself, namely Python with Pillow (~400ms), C# and Rust both with Skia (~314ms and ~260ms) and I also reimplemented the program in C++ using CImg (and libpng as well as GraphicsMagick).
I was able to get all of the drawing (creating the grid and the text) down to 4-5ms by:
Caching values where possible (Random, StringFormat, Math.Pow)
Using ArrayPool for a scratch buffer
Using the DrawString overload accepting a StringFormat with the following options:
Alignment and LineAlignment for centering (in lieu of manually calculating)
FormatFlags and Trimming options that disable things like overflow/wrapping, since we are just writing small numbers (this had an impact, though negligible)
Using a custom Font from the GenericMonospace font family instead of SystemFonts.DefaultFont
This shaved off ~15ms
Fiddling with various Graphics options, such as TextRenderingHint and SmoothingMode
I got varying results, so you may want to fiddle some more
An array of Color and the ToArgb function to create an int representing the 4 bytes of the pixel's color
Using LockBits, (semi-)unsafe code and Span to
Fill a buffer representing a row 1px high and size * count px wide (the entire image width) with the ints representing the ARGB values of the random colors
Copy that buffer size times (so it then represents an entire row of squares in height)
Rinse/repeat
unsafe was required to create a Span<> from the locked bits' Scan0 pointer
Finally, using GDI/native to draw the text over the graphic
I was then able to shave a little more time off of the actual saving process by using the Image.Save(Stream) overload. I used a FileStream with a custom buffer size of 16kb (over the default 4kb), which seemed to be the sweet spot. This brought the total end-to-end time down to around 40ms (on my machine).
private static readonly Random Random = new();
private static readonly Color[] UsedColors = { Color.Blue, Color.Red, Color.Green, Color.Orange, Color.Yellow };
private static readonly StringFormat Format = new()
{
    Alignment = StringAlignment.Center,
    LineAlignment = StringAlignment.Center,
    FormatFlags = StringFormatFlags.NoWrap | StringFormatFlags.FitBlackBox | StringFormatFlags.NoClip,
    Trimming = StringTrimming.None,
    HotkeyPrefix = HotkeyPrefix.None
};

private static unsafe void DrawGrid(int count, int size, bool save)
{
    var intsPerRow = size * count;
    var sizePerFullRow = intsPerRow * size;
    var colorsLen = UsedColors.Length;
    using var bitmap = new Bitmap(intsPerRow, intsPerRow, PixelFormat.Format32bppArgb);
    var bmpData = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
    var byteSpan = new Span<byte>(bmpData.Scan0.ToPointer(), Math.Abs(bmpData.Stride) * bmpData.Height);
    var intSpan = MemoryMarshal.Cast<byte, int>(byteSpan);
    var arr = ArrayPool<int>.Shared.Rent(intsPerRow);
    var buff = arr.AsSpan(0, intsPerRow);
    for (int y = 0, offset = 0; y < count; ++y)
    {
        // fill buffer with an entire 1px-high row of colors
        for (var bOffset = 0; bOffset < intsPerRow; bOffset += size)
            buff.Slice(bOffset, size).Fill(UsedColors[Random.Next(0, colorsLen)].ToArgb());
        // duplicate the 1px-high row until we've created a row of squares in full
        var len = offset + sizePerFullRow;
        for (; offset < len; offset += intsPerRow)
            buff.CopyTo(intSpan.Slice(offset, intsPerRow));
    }
    ArrayPool<int>.Shared.Return(arr);
    bitmap.UnlockBits(bmpData);
    using var graphics = Graphics.FromImage(bitmap);
    graphics.TextRenderingHint = TextRenderingHint.ClearTypeGridFit;
    // some or all of these may not even matter?
    // you may try removing/modifying the rest
    graphics.CompositingQuality = CompositingQuality.HighSpeed;
    graphics.InterpolationMode = InterpolationMode.Default;
    graphics.SmoothingMode = SmoothingMode.HighSpeed;
    graphics.PixelOffsetMode = PixelOffsetMode.HighSpeed;
    var font = new Font(FontFamily.GenericMonospace, 14, FontStyle.Regular);
    var lenSquares = count * count;
    for (var i = 0; i < lenSquares; ++i)
    {
        var x = i % count * size;
        var y = i / count * size;
        var rect = new Rectangle(x, y, size, size);
        graphics.DrawString(i.ToString(), font, Brushes.Black, rect, Format);
    }
    if (save)
    {
        using var fs = new FileStream("Test.png", FileMode.Create, FileAccess.Write, FileShare.Write, 16 * 1024);
        bitmap.Save(fs, ImageFormat.Png);
    }
}
Here are the timings (in ms) using a Stopwatch in Release mode, run outside of Visual Studio. At least the first 1 or 2 timings should be ignored, since the methods aren't fully jitted yet. Your mileage will vary depending on your PC, etc.
Image generation only:
Elapsed: 38
Elapsed: 6
Elapsed: 4
Elapsed: 4
Elapsed: 4
Elapsed: 4
Elapsed: 5
Elapsed: 4
Elapsed: 5
Elapsed: 4
Elapsed: 4
Image Generation and saving:
Elapsed: 95
Elapsed: 48
Elapsed: 41
Elapsed: 40
Elapsed: 37
Elapsed: 42
Elapsed: 42
Elapsed: 39
Elapsed: 38
Elapsed: 40
Elapsed: 41
I don't think there is anything that can be done about the slow save. I reviewed the source code of Image.Save. It calls into Native/GDI, passing in a Handle to the Stream, the native image pointer and the Guid representing PNG's ImageCodecInfo (encoder). Any slowness is going to be on that end. Update: I have verified that you get the same slow speed when saving to a MemoryStream so this has nothing to do with the fact you are saving to a file and everything to do with what's going on behind the scenes with GDI/native.
I also attempted to get the Image drawing down further using direct unsafe (pointers) and/or tricks with Unsafe and MemoryMarshal (ex. CopyBlock) as well as unrolling the loops. Those methods either produced identical results or worse and made things a bit harder to follow.
Note: Publishing as a console application with PublishReadyToRun=true seems to help a bit as well.
Update
I realize that the above is just an example, so this may not apply to your end goal. Upon further, extensive review I found that the bulk of the time is actually spent in Image::Save. It doesn't matter what type of Stream we are saving to; even MemoryStream exhibits the same slowness (obviously disregarding file I/O). I am confident this is related to having GDI objects in the Image/Graphics (in our case, the text from DrawString).
As a "simple" test I updated the above so that the drawing of the text happened on a secondary, all-white image. Without saving that image, I then looped over its individual pixels and, based on the rough color (since we have anti-aliasing to deal with), manually set the corresponding pixel on the primary bitmap. The entire end-to-end process took under 20ms on my machine. The rendered image wasn't perfect since it was a quick test, but it proves that you can do parts of this manually and still achieve really low times. The problem is the text drawing, but we can leverage GDI without actually using it in our final image; you just need to find the sweet spot. Using an indexed format and populating the palette with colors beforehand also appeared to help some. Anyways, just food for thought.
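
To make that test concrete, here is a rough sketch of the idea (my reconstruction, not the code from the actual test; it assumes it runs inside the unsafe DrawGrid method above, with MemoryMarshal from System.Runtime.InteropServices available):

// Render the text onto a scratch, all-white bitmap using GDI+.
using var textLayer = new Bitmap(bitmap.Width, bitmap.Height, PixelFormat.Format32bppArgb);
using (var g = Graphics.FromImage(textLayer))
{
    g.Clear(Color.White);
    // ...the same DrawString loop as above, drawing onto g instead...
}

// Transfer dark pixels onto the primary bitmap by hand.
var fullRect = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
var src = textLayer.LockBits(fullRect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
var dst = bitmap.LockBits(fullRect, ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
var srcSpan = MemoryMarshal.Cast<byte, int>(new Span<byte>(src.Scan0.ToPointer(), src.Stride * src.Height));
var dstSpan = MemoryMarshal.Cast<byte, int>(new Span<byte>(dst.Scan0.ToPointer(), dst.Stride * dst.Height));
for (var i = 0; i < srcSpan.Length; i++)
{
    // Extract the red channel; for grayscale anti-aliased text R == G == B,
    // so a single-channel threshold is a crude but workable test.
    var r = (srcSpan[i] >> 16) & 0xFF;
    if (r < 128)
        dstSpan[i] = unchecked((int)0xFF000000); // opaque black
}
textLayer.UnlockBits(src);
bitmap.UnlockBits(dst);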

Convert 12-bit Monochrome Image to 8-bit Grayscale

I have an image sensor board for embedded development for which I need to capture a stream of images and output them in 8-bit monochrome / grayscale format. The imager output is 12-bit monochrome (which takes 2 bytes per pixel).
In the code, I have an IntPtr to a memory buffer that has the 12-bit image data, from which I have to extract and convert that data down to an 8-bit image. This is represented in memory something like this (with a bright light activating the pixels):
As you can see, every second byte contains the LSBs that I want to discard; put another way, I want to keep only the odd-numbered bytes. The best solution I can conceptualize is to iterate through the memory, but that's the rub: I can't get that to work. What I need help with is an algorithm in C# to do this.
Here's a sample image that represents a direct creation of a Bitmap object from the IntPtr as follows:
bitmap = new Bitmap(imageWidth, imageHeight, imageWidth, PixelFormat.Format8bppIndexed, pImage);
// Failed Attempt #1
unsafe
{
    IntPtr pImage; // pointer to buffer containing 12-bit image data from imager
    int i = 0, imageSize = (imageWidth * imageHeight * 2); // two bytes per pixel
    byte[] imageData = new byte[imageSize];
    do
    {
        // Should I bitwise shift?
        imageData[i] = (byte)(pImage + i) << 8; // Doesn't compile, need help here!
    } while (i++ < imageSize);
}
// Failed Attempt #2
IntPtr pImage; // pointer to buffer containing 12-bit image data from imager
imageSize = imageWidth * imageHeight;
byte[] imageData = new byte[imageSize];
Marshal.Copy(pImage, imageData, 0, imageSize);
// I tried with and without this loop. Neither gives me images.
for (int i = 0; i < imageData.Length; i++)
{
    if (0 == i % 2) imageData[i / 2] = imageData[i];
}
Bitmap bitmap;
using (var ms = new MemoryStream(imageData))
{
    bitmap = new Bitmap(ms);
}
// This also introduced a memory leak somewhere.
Alternatively, if there's a way to do this with a Bitmap, byte[], MemoryStream, etc. that works, I'm all ears, but everything I've tried has failed.
Here is the algorithm that my coworkers helped formulate. It creates two (unmanaged) pointers into the same buffer: one 8 bits wide and the other 16 bits wide.
By stepping through one 16-bit word at a time and shifting off the last 4 bits of the source, we get a new 8-bit image containing only the MSBs. Both pointers walk the same number of elements, but since the elements are different sizes, the pointers advance at different rates as we iterate.
unsafe
{
    byte* p_bytebuffer = (byte*)pImage;
    short* p_shortbuffer = (short*)pImage;
    for (int i = 0; i < imageWidth * imageHeight; i++)
    {
        *p_bytebuffer++ = (byte)(*p_shortbuffer++ >> 4);
    }
}
In terms of performance, this appears to be very fast, with no perceptible difference in framerate.
Special thanks to @Herohtar for spending a substantial amount of time in chat attempting to help me solve this.
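
For completeness, a managed variant of the same conversion (a sketch under the same assumptions: pImage points to imageWidth * imageHeight 16-bit samples) that avoids unsafe code by copying the samples out with Marshal.Copy first:

// Copy the 16-bit samples into a managed array, then keep the top 8 bits of each.
int pixelCount = imageWidth * imageHeight;
short[] raw = new short[pixelCount];
Marshal.Copy(pImage, raw, 0, pixelCount);

byte[] gray = new byte[pixelCount];
for (int i = 0; i < pixelCount; i++)
    gray[i] = (byte)(raw[i] >> 4); // keep the top 8 of the 12 significant bits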

Problem getting the image row where all pixels are white

I am trying to find out whether the image is clipped at the bottom and, if it is, divide it into two images at the last all-white pixel row. Following are the simple methods I created to check for clipping and to find the empty white pixel rows. As you can see, this is not a very good solution; it might cause performance issues for larger images. So if anyone can suggest better ways, it would be a great help:
private static bool IsImageBottomClipping(Bitmap image)
{
    for (int i = 0; i < image.Width; i++)
    {
        var pixel = image.GetPixel(i, image.Height - 1);
        if (pixel.ToArgb() != Color.White.ToArgb())
        {
            return true;
        }
    }
    return false;
}

private static int GetLastWhiteLine(Bitmap image)
{
    for (int i = image.Height - 1; i >= 0; i--)
    {
        int whitePixels = 0;
        for (int j = 0; j < image.Width; j++)
        {
            var pixel = image.GetPixel(j, i);
            if (pixel.ToArgb() == Color.White.ToArgb())
            {
                whitePixels = j + 1;
            }
        }
        if (whitePixels == image.Width)
            return i;
    }
    return -1;
}
IsImageBottomClipping is working fine, but the other method is not returning the correct white pixel row. Example image:
In this case, a row around 180 should be the return value of GetLastWhiteLine, but it is returning 192.
All right, so... we've got two subjects to tackle here: first the optimising, then your bug. I'll start with the optimising.
The fastest way is to work in memory directly, but, honestly, it's kind of unwieldy. The second-best choice, which is what I generally use, is to copy the raw image data bytes out of the image object. This will make you end up with four vital pieces of data:
The width, which you can just get from the image.
The height, which you can just get from the image.
The byte array, containing the image bytes.
The stride, which gives you the amount of bytes used for each line on the image.
(Technically, there's a fifth one, namely the pixel format, but we'll just force things to 32bpp here so we don't have to take that into account along the way.)
Note that the stride, technically, is not just the amount of bytes used per pixel multiplied by the image width. It is rounded up to the next multiple of 4 bytes. When working with 32-bit ARGB content, this isn't really an issue, since 32-bit is 4 bytes, but in general, it's better to use the stride and not just the multiplied width, and write all code assuming there could be padded bytes behind each line. You'll thank me if you're ever processing 24-bit RGB content with this kind of system.
However, when going over the image's content you obviously should only check the exact range that contains pixel data, and not the full stride.
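
As a quick illustration of that rounding (my own example, not part of the original answer): a 24bpp image 101 pixels wide has 303 bytes of pixel data per row, but GDI+ pads the stride up to 304.

// Stride of a bitmap row, rounded up to the next multiple of 4 bytes.
static int ComputeStride(int width, int bytesPerPixel)
{
    return (width * bytesPerPixel + 3) & ~3;
}
// ComputeStride(101, 3) == 304, while 101 * 3 == 303.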
The way to get these things is quite simple: use LockBits on the image, tell it to expose the image as 32 bit per pixel ARGB data (it will actually convert it if needed), get the line stride, and use Marshal.Copy to copy the entire image contents into a byte array.
Int32 width = image.Width;
Int32 height = image.Height;
BitmapData sourceData = image.LockBits(new Rectangle(0, 0, width, height), ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
Int32 stride = sourceData.Stride;
Byte[] data = new Byte[stride * height];
Marshal.Copy(sourceData.Scan0, data, 0, data.Length);
image.UnlockBits(sourceData);
As mentioned, this is forced to 32-bit ARGB format. If you would want to use this system to get the data out in the original format it has inside the image, just change PixelFormat.Format32bppArgb to image.PixelFormat.
Now, you have to realise, LockBits is a rather heavy operation, which copies the data out, in the requested pixel format, to new memory, where it can be read or (if not specified as read-only as I did here) edited. What makes this more optimal than your method is, quite simply, that GetPixel performs a LockBits operation every time you request a single pixel value. So you're cutting down the amount of LockBits calls from several thousands to just one.
Anyway, now, as for your functions.
The first method is, in my opinion, completely unnecessary; you should just run the second one on any image you get. Its output gives you the last white line of the image, so if that value equals height-1 you're done, and if it doesn't, you immediately have the value needed for the further processing. The first function does exactly the same as the second, after all; it checks if all pixels on a line are white. The only difference is that it only processes the last line.
So, onto the second method. This is where things go wrong. You set the amount of white pixels to the "current pixel index plus one", rather than incrementing it to check if all pixels matched, meaning the method goes over all pixels but only really checks if the last pixel on the row was white. Since your image indeed has a white pixel at the end of the last row, it aborts after one row.
Also, whenever you find a pixel that does not match, you should just abort the scan of that line immediately, like your first method does; there's no point in continuing on that line after that.
So, let's fix that second function, and rewrite it to work with that set of "byte array", "stride", "width" and "height", rather than an image. I added the "white" colour as parameter too, to make it more reusable, so it's changed from GetLastWhiteLine to GetLastClearLine.
One general usability note: if you are iterating over the height and width, do actually call your loop variables y and x; it makes things a lot more clear in your code.
I explained the used systems in the code comments.
private static Int32 GetLastClearLine(Byte[] sourceData, Int32 stride, Int32 width, Int32 height, Color checkColor)
{
    // Get color as UInt32 in advance.
    UInt32 checkColVal = (UInt32)checkColor.ToArgb();
    // Use MemoryStream with BinaryReader since it can read UInt32 from a byte array directly.
    using (MemoryStream ms = new MemoryStream(sourceData))
    using (BinaryReader sr = new BinaryReader(ms))
    {
        for (Int32 y = height - 1; y >= 0; --y)
        {
            // Set position in the memory stream to the start of the current row.
            ms.Position = stride * y;
            Int32 matchingPixels = 0;
            // Read UInt32 pixels for the whole row length.
            for (Int32 x = 0; x < width; ++x)
            {
                // Read a UInt32 for one whole 32bpp ARGB pixel.
                UInt32 colorVal = sr.ReadUInt32();
                // Compare with check value.
                if (colorVal == checkColVal)
                    matchingPixels++;
                else
                    break;
            }
            // Test if the full line matched the given color.
            if (matchingPixels == width)
                return y;
        }
    }
    return -1;
}
This can be simplified, though; the loop variable x already contains the value you need, so if you simply declare it before the loop, you can check after the loop what value it had when the loop stopped, and there is no need to increment a second variable. And, honestly, the value read from the stream can be compared directly, without the colorVal variable. Making the contents of the y-loop:
{
    ms.Position = stride * y;
    Int32 x;
    for (x = 0; x < width; ++x)
        if (sr.ReadUInt32() != checkColVal)
            break;
    if (x == width)
        return y;
}
For your example image, this gets me value 178, which is correct when I check in Gimp.

C# Convert or compare int to (unsafe) byte*

Original Scenario
I massively misunderstood my own code, and this scenario is invalid.
This is way out of my normal wheelhouse, so I'm going to explain as best I can.
I have a user-set color code. Example:
int R = 255;
int G = 255;
int B = 255;
And I have a lot of large images where I need to check the color of
pixels at certain sets of coordinates against the user-set color. I
can successfully get the byte* of any pixel in an image, and get the
values I expect.
I do this using BitmapData from Bitmap.LockBits(...). My understanding is that locking is important for performance reasons. There will be a great many instances of this being used across very large collections of images, so performance is a major consideration.
For those same performance reasons I'm trying to avoid converting the
retrieved pixel-colors represented by unsafe bytes to integers - I'd
much rather convert my int to a byte one time and use that for the
likely millions of pixels this will be run against each time it is
invoked.
However... I cannot figure out how to get any of my user-set integers
into an unsafe byte (byte*) and compare it to the unsafe byte
retrieved from a pixel.
The unsafe byte* was a pointer to the pixel's 8-bit data (at least, that's how I understand it), but I am getting the individual colors as regular old bytes.
byte* pixel = ...; // value here pulled from image
pixel[2]; // red value byte
pixel[1]; // green value byte
pixel[0]; // blue value byte
So I don't need to convert my ints to unsafe byte pointers at all, just a simple Convert.ToByte(myInt).
The real question
But since I think this is still possibly a valid question outside my scenario, I'm going to leave this part up for someone to answer and hopefully help someone in the future:
How do you take any given int in C# and compare it to an "unsafe byte" pointer 'byte*'?
You would just want to dereference the byte pointer and compare it to the integer.
unsafe void Main()
{
    byte x = 15;
    int y = 15;
    Console.WriteLine(AreEqual(&x, y)); // True
}

public unsafe bool AreEqual(byte* bytePtr, int val)
{
    var byteVal = *bytePtr;
    return byteVal == val;
}
Let us open a bitmap and process each pixel:
//Note this has several overloads, including a path to an image.
//Use the proper one for yourself.
Bitmap b = new Bitmap(_image);
//Lock (and Load baby)
BitmapData bData = b.LockBits(new Rectangle(0, 0, b.Width, b.Height), ImageLockMode.ReadWrite, b.PixelFormat);
//Bits per pixel, obviously
int bitsPerPixel = Image.GetPixelFormatSize(b.PixelFormat);
//Gets the address of the first pixel data in the bitmap.
//This can also be thought of as the first scan line in the bitmap.
byte* scan0 = (byte*)bData.Scan0.ToPointer();
for (int i = 0; i < bData.Height; ++i)
{
    for (int j = 0; j < bData.Width; ++j)
    {
        byte* data = scan0 + i * bData.Stride + j * bitsPerPixel / 8;
        //data is a pointer to the first byte of this pixel's color data.
        //Do your magic here, compare your RGB values here.
        //Note that GDI+ lays the channels out in BGR(A) order.
        byte B = data[0]; //Dereferencing pointer here
        byte G = data[1];
        byte R = data[2];
    }
}
//Unlocking here is important or you get a memory leak.
b.UnlockBits(bData);

Why do my images seem to be in the format of Bgra instead of Argb?

So, I am very confused over a quick test that I just ran. I am doing some image processing in C#. Get/SetPixel() have proven to be too slow, so I am using LockBits to get at the raw data.
However, I seem to have hit a situation which I can't figure out. While scanning the image, it seems that each pixel is laid out as Bgra, that is, blue byte, green byte, red byte, and alpha, in that order. I was under the impression that they would be laid out in Argb order. Here is a sample of the code that I am using.
BitmapData baseData = m_baseImage.LockBits(new Rectangle(new Point(0, 0), m_baseImage.Size),
    ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
Bitmap test = new Bitmap(m_baseImage.Width, m_baseImage.Height);
byte* ptr = (byte*)baseData.Scan0;
for (int y = 0; y < m_baseImage.Height; ++y)
{
    for (int x = 0; x < m_baseImage.Width; ++x)
    {
        // this works, image is copied correctly
        Color c1 = Color.FromArgb(*(ptr + 3), *(ptr + 2), *(ptr + 1), *ptr);
        // below does not work! Bytes are reversed.
        //Color c1 = Color.FromArgb(*ptr, *(ptr + 1), *(ptr + 2), *(ptr + 3));
        test.SetPixel(x, y, c1);
        ptr += 4;
    }
}
m_baseImage.UnlockBits(baseData);
pictureBox1.Image = m_baseImage;
pictureBox2.Image = test;
The first line which grabs the color of the base image works, the second does not. I am pretty sure that I am missing something very obvious here.
Not only are the colors reversed to BGRA, but the rows are reversed as well: the bottom of the image comes first in memory. It's just the way Windows has always worked.
The little-endian explanation seems obvious, but I don't think it's the whole truth. If you look at the definition of COLORREF in the Windows API, you'll notice that red is the low-order byte and blue is the higher-order one; if you stored this as a single integer value, the bytes would read RGB0.
ARGB refers to the byte order when a pixel is fetched as a whole 32-bit word. If you fetch the bytes one at a time, you'll receive them low to high, since IBM PCs are little-endian.
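
A small self-contained sketch that makes the relationship visible (my illustration): the packed int matches Color.ToArgb(), while reading the same value byte by byte yields the B, G, R, A sequence seen when walking the LockBits buffer.

using System;
using System.Drawing;

class EndianDemo
{
    static void Main()
    {
        int argb = Color.FromArgb(255, 16, 32, 48).ToArgb(); // 0xFF102030 (A, R, G, B)
        byte[] bytes = BitConverter.GetBytes(argb);          // little-endian on x86/x64
        // Prints 30-20-10-FF: blue, green, red, alpha.
        Console.WriteLine(BitConverter.ToString(bytes));
    }
}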
