I'm using the ImageSharp library (available on NuGet as 1.0.0-beta0001) for image generation and manipulation in .NET Core 2.0, and have encountered something that I can't seem to find a way around.
Using this example as a basis, I'm trying to round the corners of a white image. What I'm finding is that the transparent IPath corners being punched out end up antialiasing dark, as if the transparent "color" were black or gray (whereas it really shouldn't be considered any color at all).
Here's the upper-right quadrant of the image to demonstrate what I mean:
I tried all the options for PixelBlenderMode at this part of the code and none have produced what I'm after:
img.Mutate(x => x.Fill(Rgba32.Transparent, corners, new GraphicsOptions(true)
{
    // enforces that any part of this shape that has color is punched out of the background
    BlenderMode = PixelBlenderMode.Src
}));
The problem you are seeing is a consequence of incorrect pixel sampling: pixels with very low alpha were affecting the averaged value of the individual colour components too much.
Quoting some text from the link below, as it explains the issue well:
A pixel in the white area has a color of RGBA(1.00, 1.00, 1.00, 1.00). A pixel in the red area has color RGBA(1.00, 0.00, 0.00, 0.10). If you average those numbers together, you get (1.00, 0.50, 0.50, 0.55). That’s the color a typical pixel on the boundary would have if you resize the image in the straightforward way.
But that color – a half-opaque light red – is the wrong color! You can see a faint red halo around the border, which shouldn’t be there:
http://entropymine.com/imageworsener/resizealpha/
The fix is simple: values must be premultiplied by their alpha component before sampling.
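As a small illustration (not ImageSharp's internal code), here's a sketch that averages the two pixels from the quoted example, first naively and then with premultiplied alpha; only the premultiplied version stays close to white:

using System;

class PremultipliedAlphaDemo
{
    // Naive component-wise average: produces the half-opaque light red from the quote.
    static (double R, double G, double B, double A) AverageNaive(
        (double R, double G, double B, double A) p,
        (double R, double G, double B, double A) q)
        => ((p.R + q.R) / 2, (p.G + q.G) / 2, (p.B + q.B) / 2, (p.A + q.A) / 2);

    // Premultiplied average: multiply colour by alpha first, average, then un-premultiply.
    static (double R, double G, double B, double A) AveragePremultiplied(
        (double R, double G, double B, double A) p,
        (double R, double G, double B, double A) q)
    {
        double a = (p.A + q.A) / 2;
        double r = (p.R * p.A + q.R * q.A) / 2;
        double g = (p.G * p.A + q.G * q.A) / 2;
        double b = (p.B * p.A + q.B * q.A) / 2;
        return a > 0 ? (r / a, g / a, b / a, a) : (0.0, 0.0, 0.0, 0.0);
    }

    static void Main()
    {
        var white = (R: 1.00, G: 1.00, B: 1.00, A: 1.00);
        var faintRed = (R: 1.00, G: 0.00, B: 0.00, A: 0.10);

        Console.WriteLine(AverageNaive(white, faintRed));         // (1, 0.5, 0.5, 0.55) - red halo
        Console.WriteLine(AveragePremultiplied(white, faintRed)); // (1, ~0.91, ~0.91, 0.55) - nearly white
    }
}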
However, I don't know whether this issue is happening due to resizing the generated image (fixed in the latest nightlies), or during the fill process. I would need to see sample code first to be sure as I do not know how you are providing the source image.
Related
I'm having difficulty with a System.Drawing bug in C# that I've been trying to diagnose for five months.
My program essentially implements Photoshop-style brushes. Example below.
It works well on images that are fully opaque, but if the image has any transparency at all, strange errors start to appear. Consider this.
The gray border on every line, highly visible in the second picture, is an alpha artifact. If I draw over the same region repeatedly, I can get the color as dark as (127, 127, 127) in RGB, which is exact middle gray, and I don't think that's a coincidence.
This error occurs when opening/closing the dialog, undo/redo, and drawing over transparent regions.
Anyway, I'd love to get help with fixing this GDI+ issue. I've been here:
Color value with alpha of zero shows up as black
GDI+ Bug: Anti-Aliasing White On Transparency
C# Resized images have black borders
Ghost-borders ('ringing') when resizing in GDI+
How to solve grayish frame issue when Scaling a bitmap using GDI+
DrawImage() function over WinForms does not work correctly
http://www.codeproject.com/Articles/14884/BorderBug
Possibility: there is a Border Bug where resizing samples pixels outside the image; these are assumed to be black and get included in the calculation (resulting in blackened borders).
But that doesn't explain how copying and writing the image over the source causes this effect.
Possibility: creating a new bitmap automatically premultiplies each pixel by its alpha.
I've tried a tested workaround that locks the bits and copies them over (to avoid multiplying by alpha), but this doesn't solve my issue.
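For reference, that kind of workaround is roughly the following (a sketch assuming a 32bpp ARGB source bitmap and using System.Drawing.Imaging plus System.Runtime.InteropServices; it is not the project's exact code):

// Copies raw 32bpp ARGB pixels into a fresh bitmap via LockBits, bypassing
// Graphics.DrawImage so GDI+ never premultiplies or resamples anything.
static Bitmap CopyViaLockBits(Bitmap source)
{
    var copy = new Bitmap(source.Width, source.Height, PixelFormat.Format32bppArgb);
    var rect = new Rectangle(0, 0, source.Width, source.Height);

    BitmapData srcData = source.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    BitmapData dstData = copy.LockBits(rect, ImageLockMode.WriteOnly, PixelFormat.Format32bppArgb);
    try
    {
        // Same width and pixel format, so the strides match.
        byte[] buffer = new byte[Math.Abs(srcData.Stride) * source.Height];
        Marshal.Copy(srcData.Scan0, buffer, 0, buffer.Length);
        Marshal.Copy(buffer, 0, dstData.Scan0, buffer.Length);
    }
    finally
    {
        source.UnlockBits(srcData);
        copy.UnlockBits(dstData);
    }
    return copy;
}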
I've tried all the solutions presented here, which I summarize below:
Any combination of InterpolationMode.NearestNeighbor, SmoothingMode.None, and PixelOffsetMode.Half.
Clearing with a transparent color, Color.White, and Color.Black, along with the above attempts as well.
Workarounds for cloning bitmaps, sometimes in conjunction with other workarounds above.
Using an ImageAttributes object and setting WrapMode to TileFlipXY (sketched after this list), sometimes in conjunction with the above workarounds.
Using Color.ConvertFromPremultipliedAlpha and Color.ConvertToPremultipliedAlpha, with a definitely bad effect.
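For readers who haven't seen it, the ImageAttributes workaround mentioned in the list typically looks like this (a sketch, not the poster's actual code; source and destination are placeholder bitmaps):

// Draws one bitmap onto another with WrapMode.TileFlipXY so the resampler
// mirrors edge pixels instead of reading "transparent black" beyond the borders.
using (var g = Graphics.FromImage(destination))
using (var attributes = new ImageAttributes())
{
    attributes.SetWrapMode(WrapMode.TileFlipXY);
    g.DrawImage(
        source,
        new Rectangle(0, 0, destination.Width, destination.Height), // destination rectangle
        0, 0, source.Width, source.Height,                          // source rectangle
        GraphicsUnit.Pixel,
        attributes);
}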
Here is the main file, which includes all the relevant code.
A tiny bit more discussion on my original post off-site.
I'm having difficulty rendering a character in a specific font style to a bitmap image (black and white). The font is essentially black and white and I'm drawing the character in black against a white background, yet when I convert it to a bitmap image I get a thin coloured outline around the boundary of my character.
Can anyone tell me where that grey colour comes from when I'm only drawing with black, and how can I get ONLY black and white pixels?
The pixels that aren't completely black or completely white are the result of anti-aliasing. Anti-aliasing is used by default since everyone who doesn't know about it probably wants it.
I suggest two alternatives. First, create your bitmap with a one-bit-per-pixel format, which gives anti-aliasing no chance to happen. Second, after the text has been drawn, go through the resulting image pixel by pixel and snap each pixel to either black or white based on a threshold: if the pixel is darker than half, it's black, otherwise it's white, e.g. if (red + green + blue > 383) set_pixel_white(); else set_pixel_black(); But be ready for some rather funny results; you may need to play with the threshold.
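If it helps, here's a minimal sketch of that second approach, using the (slow) GetPixel/SetPixel for clarity; ThresholdToBlackAndWhite is just a hypothetical helper name:

// Snaps every pixel to pure black or pure white using the
// red + green + blue > 383 threshold suggested above.
static void ThresholdToBlackAndWhite(Bitmap bmp)
{
    for (int y = 0; y < bmp.Height; y++)
    {
        for (int x = 0; x < bmp.Width; x++)
        {
            Color c = bmp.GetPixel(x, y);
            bool white = c.R + c.G + c.B > 383;
            bmp.SetPixel(x, y, white ? Color.White : Color.Black);
        }
    }
}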
P.S. There's a better solution: you can tweak the anti-aliasing itself (see TextRenderingHint on MSDN). Set your rendering hint to System.Drawing.Text.TextRenderingHint.SingleBitPerPixel, or whatever suits you.
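That hint is set on the Graphics object before drawing the text; roughly like this (a sketch with a placeholder bitmap, font, and string):

using (var g = Graphics.FromImage(bmp))
using (var font = new Font("Arial", 48))
{
    // SingleBitPerPixel disables anti-aliasing, so only pure black pixels are produced.
    g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.SingleBitPerPixel;
    g.Clear(Color.White);
    g.DrawString("A", font, Brushes.Black, PointF.Empty);
}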
I have an image where I need to change the background colour (E.g. changing the background of the example image below to blue).
However, the image is anti-aliased so I cannot simply do a replace of the background colour with a different colour.
One way I have tried is creating a second image that is just the background and changing the colour of that and merging the two images into one, however this does not work as the border between the two images is fuzzy.
Is there any way to do this, or some other way to achieve this that I have not considered?
Example image
Just using GDI+
Image image = Image.FromFile("cloud.png");
Bitmap bmp = new Bitmap(image.Width, image.Height);
using (Graphics g = Graphics.FromImage(bmp)) {
    g.Clear(Color.SkyBlue);                                  // fill the new background color first
    g.InterpolationMode = InterpolationMode.NearestNeighbor;
    g.PixelOffsetMode = PixelOffsetMode.None;
    g.DrawImage(image, Point.Empty);                         // composite the original on top
}
resulted in:
Abstractly
Each pixel in your image is an (R, G, B) vector, where each component is in the range [0, 1]. You want a transform, T, that converts every pixel in your image to a new (R', G', B') under the following constraints:
black should stay black
T(0, 0, 0) = (0, 0, 0)
white should become your chosen color C*
T(1, 1, 1) = C*
A straightforward way to do this is to choose the following transform T:
T(c) = C* .* c (where .* denotes element-wise multiplication)
This is just standard image multiplication.
Concretely
If you're not worried about performance, you can use the (very slow) methods GetPixel and SetPixel on your Bitmap to apply this transform for each pixel in it. If it's not clear how to do this, just say so in a comment and I'll add a detailed explanation for that part.
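For reference, a rough sketch of that per-pixel loop (Recolor is a hypothetical helper; pass your chosen C* as target, e.g. Recolor(bmp, Color.SkyBlue)):

// Applies T(c) = C* .* c to every pixel: black stays black, white becomes target.
// GetPixel/SetPixel are slow but keep the idea obvious.
static void Recolor(Bitmap bmp, Color target)
{
    for (int y = 0; y < bmp.Height; y++)
    {
        for (int x = 0; x < bmp.Width; x++)
        {
            Color c = bmp.GetPixel(x, y);
            bmp.SetPixel(x, y, Color.FromArgb(
                c.A,
                c.R * target.R / 255,
                c.G * target.G / 255,
                c.B * target.B / 255));
        }
    }
}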
Comparison
Compare this to the method presented by LarsTech. The method presented here is on the top; the method presented by LarsTech is on the bottom. Notice the undesirable edge effects on the bottom icon (white haze on the edges).
And here is the image difference of the two:
Afterthought
If your source image has a transparent (i.e. transparent-white) background and a black foreground (as in your example), then you can simply make your transform T(a, r, g, b) = (a, 0, 0, 0) and then draw your image on top of whatever background color you want, as LarsTech suggested.
If it's a uniform colour you want to replace, you could convert that colour to alpha. I wouldn't like to code it myself!
You could use GIMP's Color To Alpha source code (it's GPL); here's a version of it
P.S. Not sure how to get the latest.
Background removal/replacement is, IMO, more art than science; you won't find a one-algorithm-fits-all solution for this. But depending on how desperate or interested you are in solving this problem, you may want to consider the following approach:
Let’s assume you have a color image.
Use your choice of decoding mechanism and generate a gray scale / luminosity image of your color image.
Plot a graph (metaphorically speaking) of pixel value (x) versus the number of pixels in the image with that value (y), i.e. a luminosity histogram.
Now, if your background is large enough (or small enough), you'll see a part of the graph representing the distribution of the pixel values that constitute your background. You may want to select a slightly wider range to handle the anti-aliasing (based on a fixed offset you define, if you are dealing with similar images) and call it the luminosity range of your background.
It makes life easier if you know at least one pixel (a sample/median pixel value) from the background; that way you can 'look up' the part of the graph that defines your background.
Once you have the luminosity range for the background, run through the pixels of the original image, compare each pixel's luminosity with that range, and if it falls within the range, replace the pixel with the desired colour, preferably luminosity-shifted based on the original pixel and the sample pixel so the replaced background looks anti-aliased too.
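If it helps, here is a rough sketch of that idea in C# (assumptions: a 32bpp Bitmap, one known background sample pixel, and a fixed tolerance instead of a full histogram analysis; ReplaceBackground is a hypothetical helper):

// Replaces every pixel whose luminosity is within `tolerance` of the sampled
// background luminosity, shifting the replacement colour by the same offset
// so anti-aliased edges stay smooth.
static void ReplaceBackground(Bitmap bmp, Point backgroundSample, Color newBackground, int tolerance)
{
    int Luma(Color c) => (int)(0.299 * c.R + 0.587 * c.G + 0.114 * c.B);

    int backgroundLuma = Luma(bmp.GetPixel(backgroundSample.X, backgroundSample.Y));

    for (int y = 0; y < bmp.Height; y++)
    {
        for (int x = 0; x < bmp.Width; x++)
        {
            Color c = bmp.GetPixel(x, y);
            int delta = Luma(c) - backgroundLuma;
            if (Math.Abs(delta) <= tolerance)
            {
                int r = Math.Max(0, Math.Min(255, newBackground.R + delta));
                int g = Math.Max(0, Math.Min(255, newBackground.G + delta));
                int b = Math.Max(0, Math.Min(255, newBackground.B + delta));
                bmp.SetPixel(x, y, Color.FromArgb(c.A, r, g, b));
            }
        }
    }
}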
This is not a perfect solution and there are a lot of scenarios where it might fail / partially fail, but again it would work for the sample image that you had attached with your question.
Also there are a lot of performance improvement opportunities, including GPGPU etc.
Another possible solution would be to use a pre-built third-party image processing library; there are a few open-source ones, such as Camellia, but I am not sure what features they provide or how sophisticated they are.
I'm clearing an image with a transparent color (120 alpha) and then drawing a string onto it with a gradient, and then drawing that image onto a larger image, but the text has a blackish edge to it instead of being nice and smooth like it should be. The text looks fine if the background is drawn with 255 alpha.
120 Alpha: Image
255 Alpha: Image
As you can see, the text is much easier to read with the background fully opaque
Note: the green dot is my cursor
Edit: gfx.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAlias; removes the black edges but it's blurry, I'll try some other combinations of graphics settings and see how this goes.
Edit: gfx.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAliasGridFit; Looks much better, although the A's in the Arial font look a little funky
This is normal behaviour. You must change your drawing order to get it right.
Since you draw the text onto a semi-transparent surface, its anti-aliasing pixels will be semi-transparent too, but somewhere in between the text color and the background of the first image.
Now, if you draw the result onto another image, you will have uniformly transparent pixels where no text is, no transparency where text is, and varying transparencies and colors for the anti-aliasing pixels.
Note that those will have various colors, as the anti-aliasing tries to spread color differences as well as differences in brightness.
Either write on a non-transparent surface or delay the writing to the end. (Or turn off all anti-aliasing. But that's not nice.)
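In code, the suggested reordering is roughly this (a sketch; finalImage, panelBounds, and the text values are placeholders):

// Composite the semi-transparent panel onto the final image first, then draw the
// text directly onto the final image so its anti-aliasing blends against the
// real background instead of against transparency.
using (var g = Graphics.FromImage(finalImage))
using (var panelBrush = new SolidBrush(Color.FromArgb(120, Color.Black)))
using (var font = new Font("Arial", 12))
{
    g.FillRectangle(panelBrush, panelBounds);
    g.TextRenderingHint = System.Drawing.Text.TextRenderingHint.AntiAliasGridFit;
    g.DrawString("Hello", font, Brushes.White, panelBounds.Location);
}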
I do NOT want the system trying to scale my drawing; I want to do it entirely on my own, as any attempt to squeeze/stretch the graphics will produce ugly results. The problem is that as the image gets bigger I want to add more detail rather than have it simply scale up.
Right now I'm looking at two sets of stripes. One is black/white, the other is black/white/white. The pen width is set to 1.
When the line is drawn horizontally it's correct. The same logic drawing vertical lines appears to be doing some anti-aliasing, bleeding the black onto the nearby white. The black/white/white doesn't look as good as the horizontal version, and the black/white looks more like medium++ gray / medium-- gray.
The same code is generating the coordinates in all cases, the transform logic is simply selecting what offset to apply where as I am only supporting orientations on the cardinals. Since there's no floating point involved I can't be looking at precision issues.
How do I get the system to leave my graphics alone???
(Yeah, I realize this won't work at very high resolution and eventually I'll have to scale up the lines. Over any reasonable on-screen zoom factor this won't matter, for printer use I'll have to play with it and see where I need to scale. The basic problem is that I'm trying to shoehorn things into too few pixels without just making blobs.)
Edit: There is no scaling going on. I'm generating a bitmap the exact size of the target window. All lines are drawn at integer coordinates. The recommendation of setting SmoothingMode to None changes the situation: Now the black/white/white draws as a very clear gray/gray/white and the black/white draws as a solid gray box. Now that this is cleaned up I can see some individual vertical lines that were supposed to be black are actually doing the same thing of drawing as 2-pixel gray bars. It's like all my vertical lines are off by 1/2 pixel--yet every drawing command gets only integers.
Edit again: I've learned more about the problem. The image is being drawn correctly but trashed when displayed to the screen. (Saving it to disk and viewing it on the very same monitor shows it drawn correctly.)
You really should let the system manage it for you. You have described a certain behavior that is specific to the hardware you are using. Given different hardware, the problem may not exist at all, or it may exist horizontally but not vertically, or may only exist at much smaller or much larger resolutions, etc. etc.
The basic problem you described sounds like the vertical lines are being drawn "between" vertical stacks of pixels, which is causing the system to draw an anti-aliased line. The alternative to anti-aliasing the line is to shift it. The problem with that is the lines will "jitter" or "jerk" if the image is moved around, animated, or scaled or transformed in any other way. Generally, jerk is MUCH less desirable than anti-aliasing because it is more distracting.
You should be able to turn off anti-aliasing using the SmoothingMode enum, or you could try to handle positioning yourself. Either way, you are trading anti-aliasing for jittery, jerky rendering during any movement or transformation.
Have a look at System.Drawing.Drawing2D.SmoothingMode. Setting it to 'Default' or 'None' should turn off anti-aliasing when doing line drawing. If you're talking about scaling an image without anti-aliasing effects, have a look at InterpolationMode. Specifically, you might wish to set it to NearestNeighbor, which will keep your rectangular blocks perfectly crisp. Note that you will see some odd effects if you scale your image by anything other than whole numbers.
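A sketch of those settings (bmp is a placeholder target bitmap):

using (var g = Graphics.FromImage(bmp))
{
    // No anti-aliasing for lines, no resampling for any image draws.
    g.SmoothingMode = System.Drawing.Drawing2D.SmoothingMode.None;
    g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.NearestNeighbor;

    g.DrawLine(Pens.Black, 5, 0, 5, bmp.Height - 1); // a 1-pixel-wide vertical stripe
}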
Perhaps you need to align your lines on half-pixel coordinates? A one-pixel line drawn at, say, x = 5 is centred on that coordinate, which means it spans from x = 4.5 to x = 5.5. If you want it to go from x = 4 to x = 5, you'd need to set its coordinate to x = 4.5.
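For example (a sketch; g is a Graphics over the target bitmap bmp):

// A 1-pixel pen at x = 5 straddles x = 4.5..5.5 and gets anti-aliased across two
// columns; shifting to x = 4.5 covers exactly the pixel column from x = 4 to x = 5.
g.DrawLine(Pens.Black, 4.5f, 0f, 4.5f, bmp.Height - 1f);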
GDI+ has a property, PixelOffsetMode (http://msdn.microsoft.com/en-us/library/system.drawing.graphics.pixeloffsetmode.aspx), that allows you to control this behavior.
Sounds like you need to change your application to tell the system it is DPI aware so scaling doesn't occur. Here's an article on doing that: http://msdn.microsoft.com/en-us/library/ms701681%28VS.85%29.aspx