Setting an unmanaged image via int array - c#

I have an array of 8 bytes that I want to put into a bitmap with a height of 4 (downwards) and a width of 2 (rightwards). My code:
unsafe
{
IntPtr pixels = Marshal.AllocHGlobal(8);
Marshal.WriteByte(pixels, 0 * Marshal.SizeOf(typeof(Byte)), 0);
Marshal.WriteByte(pixels, 1 * Marshal.SizeOf(typeof(Byte)), 255);
Marshal.WriteByte(pixels, 2 * Marshal.SizeOf(typeof(Byte)), 0);
Marshal.WriteByte(pixels, 3 * Marshal.SizeOf(typeof(Byte)), 255);
Marshal.WriteByte(pixels, 4 * Marshal.SizeOf(typeof(Byte)), 0);
Marshal.WriteByte(pixels, 5 * Marshal.SizeOf(typeof(Byte)), 0);
Marshal.WriteByte(pixels, 6 * Marshal.SizeOf(typeof(Byte)), 0);
Marshal.WriteByte(pixels, 7 * Marshal.SizeOf(typeof(Byte)), 0);
var newImage = new UnmanagedImage(pixels, 2, 4, 1, System.Drawing.Imaging.PixelFormat.Format8bppIndexed);
var myBM = newImage.ToManagedImage();
myBM.Save("outputBM.bmp", System.Drawing.Imaging.ImageFormat.Bmp);
Marshal.FreeHGlobal(pixels);
}
The issue is that the image only matches my array for the first column:
However if I try to change:
Marshal.WriteByte(pixels, 4 * Marshal.SizeOf(typeof(Byte)), 0);
to
Marshal.WriteByte(pixels, 4 * Marshal.SizeOf(typeof(Byte)), 255);
I would expect the pixel in the 2nd column, 1st row to change to white. However this does not happen. Is there anything wrong with the code?

You have to set the stride parameter of the UnmanagedImage() constructor to 2.
stride: Image stride (line size in bytes).
After changing the constructor call to:
var newImage = new UnmanagedImage(pixels, 2, 4, 2, System.Drawing.Imaging.PixelFormat.Format8bppIndexed);
the code will produce the following image:
When you change:
Marshal.WriteByte(pixels, 4 * Marshal.SizeOf(typeof(Byte)), 0);
to
Marshal.WriteByte(pixels, 4 * Marshal.SizeOf(typeof(Byte)), 255);
The image produced will look as follows:
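For reference, a minimal sketch of the corrected setup, assuming AForge.NET's UnmanagedImage; for an 8bpp image the stride here is simply the width, since a 2-byte row needs no padding:
IntPtr pixels = Marshal.AllocHGlobal(8);
byte[] data = { 0, 255, 0, 255, 255, 0, 0, 0 }; // 4 rows of 2 pixels, stored row by row
Marshal.Copy(data, 0, pixels, data.Length);
var newImage = new UnmanagedImage(pixels, 2, 4, 2, System.Drawing.Imaging.PixelFormat.Format8bppIndexed);
using (var myBM = newImage.ToManagedImage())
myBM.Save("outputBM.bmp", System.Drawing.Imaging.ImageFormat.Bmp);
Marshal.FreeHGlobal(pixels);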

Related

How to render instances using the new OpenGL / OpenTK APIs

I'm trying to put together information from this tutorial (Advanced-OpenGL/Instancing) and these answers (How to render using 2 VBO) (New API Clarification) in order to render instances of a square, passing the model matrix for each instance to the shader through an ArrayBuffer. The code I ended up with is below. I sliced and tested each part, and the problem seems to be that the model matrix itself is not passed correctly to the shader. I'm using OpenTK in Visual Studio.
For simplicity and debugging, the pool contains just a single square, so I don't yet have divisor problems or other complications I haven't dealt with.
My vertex data arrays contain 3 floats for position and 4 floats for color (stride = 7 times the float size).
My results with the attached code are:
if I remove the imodel multiplication in the vertex shader, I get exactly what I expect: a red square (rendered as 2 triangles) with a green border (rendered as a line loop).
if I change the shader and multiply by the model matrix, I get a red line above the center of the screen whose length changes over time. The animation makes sense, because the simulation is rotating the square, so the angle updates regularly and the computed model matrix changes with it. That is another encouraging result, because it means I'm actually sending dynamic data to the shader. However, I can't get my original square rotated and translated.
Any clue?
Thanks a lot.
Vertex Shader:
#version 430 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec4 aCol;
layout (location = 2) in mat4 imodel;
out vec4 fColor;
uniform mat4 view;
uniform mat4 projection;
void main() {
fColor = aCol;
gl_Position = vec4(aPos, 1.0) * imodel * view * projection;
}
Fragment Shader:
#version 430 core
in vec4 fColor;
out vec4 FragColor;
void main() {
FragColor = fColor;
}
OnLoad snippet (initialization):
InstanceVBO = GL.GenBuffer();
GL.GenBuffers(2, VBO);
GL.BindBuffer(BufferTarget.ArrayBuffer, VBO[0]);
GL.BufferData(BufferTarget.ArrayBuffer,
7 * LineLoopVertCount * sizeof(float),
LineLoopVertData, BufferUsageHint.StaticDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, VBO[1]);
GL.BufferData(BufferTarget.ArrayBuffer,
7 * TrianglesVertCount * sizeof(float),
TrianglesVertData, BufferUsageHint.StaticDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
// VAO SETUP
VAO = GL.GenVertexArray();
GL.BindVertexArray(VAO);
// Position
GL.EnableVertexAttribArray(0);
GL.VertexAttribFormat(0, 3, VertexAttribType.Float, false, 0);
GL.VertexArrayAttribBinding(VAO, 0, 0);
// Color
GL.EnableVertexAttribArray(1);
GL.VertexAttribFormat(1, 4, VertexAttribType.Float, false, 3 * sizeof(float));
GL.VertexArrayAttribBinding(VAO, 1, 0);
int vec4Size = 4;
GL.EnableVertexAttribArray(2);
GL.VertexAttribFormat(2, 4, VertexAttribType.Float, false, 0 * vec4Size * sizeof(float));
GL.VertexAttribFormat(3, 4, VertexAttribType.Float, false, 1 * vec4Size * sizeof(float));
GL.VertexAttribFormat(4, 4, VertexAttribType.Float, false, 2 * vec4Size * sizeof(float));
GL.VertexAttribFormat(5, 4, VertexAttribType.Float, false, 3 * vec4Size * sizeof(float));
GL.VertexAttribDivisor(2, 1);
GL.VertexAttribDivisor(3, 1);
GL.VertexAttribDivisor(4, 1);
GL.VertexAttribDivisor(5, 1);
GL.VertexArrayAttribBinding(VAO, 2, 1);
GL.BindVertexArray(0);
OnFrameRender snippet:
shader.Use();
shader.SetMatrix4("view", cameraViewMatrix);
shader.SetMatrix4("projection", cameraProjectionMatrix);
int mat4Size = 16;
for (int i = 0; i < simulation.poolCount; i++)
{
modelMatrix[i] = Matrix4.CreateFromAxisAngle(
this.RotationAxis, simulation.pool[i].Angle);
modelMatrix[i] = modelMatrix[i] * Matrix4.CreateTranslation(new Vector3(
simulation.pool[i].Position.X,
simulation.pool[i].Position.Y,
0f));
//modelMatrix[i] = Matrix4.Identity;
}
// Copy model matrices into the VBO
// ----------------------------------------
GL.BindBuffer(BufferTarget.ArrayBuffer, InstanceVBO);
GL.BufferData(BufferTarget.ArrayBuffer,
simulation.poolCount * mat4Size * sizeof(float),
modelMatrix, BufferUsageHint.DynamicDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);
// ----------------------------------------
GL.BindVertexArray(VAO);
GL.BindVertexBuffer(1, InstanceVBO, IntPtr.Zero, mat4Size * sizeof(float));
GL.BindVertexBuffer(0, VBO[0], IntPtr.Zero, 7 * sizeof(float));
GL.DrawArraysInstanced(PrimitiveType.LineLoop, 0, LineLoopVertCount, simulation.poolCount);
GL.BindVertexBuffer(0, VBO[1], IntPtr.Zero, 7 * sizeof(float));
GL.DrawArraysInstanced(PrimitiveType.Triangles, 0, TrianglesVertCount, simulation.poolCount);
GL.BindVertexArray(0);
There is a lot wrong here.
First, you don't enable any of the attribute arrays after 2, even though your shader says that you're reading 3-5 too. Similarly, you don't set the attribute binding for any of the arrays after 2.
But your bigger problem is that you use glVertexAttribDivisor. That's the wrong function for what you're trying to do. That's the old API for setting the divisor.
In separate attribute format, the divisor is part of the buffer binding, not the vertex attribute. So the divisor needs to be set with glVertexBindingDivisor, and the index it is given is the index you intend to bind the buffer to. Which should be 1.
So presumably, your code should look like:
int vec4Size = 4;
for(int ix = 0; ix < 4; ++ix)
{
int attribIx = 2 + ix;
GL.EnableVertexAttribArray(attribIx);
GL.VertexAttribFormat(attribIx, 4, VertexAttribType.Float, false, ix * vec4Size * sizeof(float));
GL.VertexArrayAttribBinding(VAO, attribIx, 1); //All use the same buffer binding
}
GL.VertexBindingDivisor(1, 1);
In OpenTK, matrices are stored row by row, while GLSL by default expects the data column by column, so the rows and columns need to be swapped (transposed) before upload. You can do it like this:
private Matrix4[] TransposeMatrix(Matrix4[] inputModel)
{
var outputModel = new Matrix4[inputModel.Length];
for(int i = 0; i < inputModel.Length; i++)
{
outputModel[i].Row0 = inputModel[i].Column0;
outputModel[i].Row1 = inputModel[i].Column1;
outputModel[i].Row2 = inputModel[i].Column2;
outputModel[i].Row3 = inputModel[i].Column3;
}
return outputModel;
}
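If you go this route, a usage sketch (reusing the names from the OnFrameRender snippet above) could look like this:
// Upload the transposed copies instead of the original model matrices
var uploadMatrices = TransposeMatrix(modelMatrix);
GL.BindBuffer(BufferTarget.ArrayBuffer, InstanceVBO);
GL.BufferData(BufferTarget.ArrayBuffer,
simulation.poolCount * mat4Size * sizeof(float),
uploadMatrices, BufferUsageHint.DynamicDraw);
GL.BindBuffer(BufferTarget.ArrayBuffer, 0);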

How to blend colours with transparency?

I want to be able to blend two or more Color objects. Say I start with a semi-transparent red:
var red = Color.FromArgb(140, 255, 0, 0);
I would then like to blend a semi-transparent green into it:
var green = Color.FromArgb(140, 0, 255, 0);
The colour mixing code snippets that I've come across usually result in a shade of brown here, but what I'm really looking for is an effect like what you'd get if you were to draw one colour over another in (say) Paint.Net, resulting in more of a dark green:-
I'd then like to mix in a third colour, say a semi-transparent blue:
var blue = Color.FromArgb(140, 0, 0, 255);
This time, I would like to end up with the teal-ish colour seen in the centre of this image:
(Again if I try to use the usual code snippets then I usually end up with a grey or brown).
As an aside, it's probably worth mentioning that the following code pretty much achieves what I'm after:
using (var bitmap = new Bitmap(300, 300))
{
using (var g = Graphics.FromImage(bitmap))
{
var c1 = Color.FromArgb(alpha: 128, red: 255, green: 0, blue: 0);
var c2 = Color.FromArgb(alpha: 200, red: 0, green: 255, blue: 0);
var c3 = Color.FromArgb(alpha: 100, red: 0, green: 0, blue: 255);
g.FillRectangle(new SolidBrush(c1), 100, 0, 100, 100);
g.FillRectangle(new SolidBrush(c2), 125, 75, 100, 100);
g.FillRectangle(new SolidBrush(c3), 75, 50, 100, 100);
}
}
I guess it's the code equivalent of my Paint.Net steps, drawing one semi-transparent coloured rectangle over another. However I want to be able to calculate the final blended colour, and use that in one call to g.FillRectangle(), rather than call the method three times to achieve the blending effect.
Finally, this is an example of the kind of colour mixing code snippet that I referred to earlier, which typically yields shades of brown when I use it with my colours:-
private Color Blend(Color c1, Color c2)
{
var aOut = c1.A + (c1.A * (255 - c1.A) / 255);
var rOut = (c1.R * c1.A + c2.R * c2.A * (255 - c1.A) / 255) / aOut;
var gOut = (c1.G * c1.A + c2.G * c2.A * (255 - c1.A) / 255) / aOut;
var bOut = (c1.B * c1.A + c2.B * c2.A * (255 - c1.A) / 255) / aOut;
return Color.FromArgb(aOut, rOut, gOut, bOut);
}
Thanks to the earlier comment from @TaW (otherwise I'd never have solved this!), I was able to write a method that blends two colours in the style of Paint.NET, PaintShop, etc.:-
var r = Color.FromArgb(140, 255, 0, 0);
var g = Color.FromArgb(140, 0, 255, 0);
var b = Color.FromArgb(140, 0, 0, 255);
// How to use:
var rg= AlphaComposite(r, g);
var rb= AlphaComposite(r, b);
var gb= AlphaComposite(g, b);
var rgb= AlphaComposite(AlphaComposite(r, g), b);
...
// A cache of all opacity values (0-255) scaled down to 0-1 for performance
private readonly float[] _opacities = Enumerable.Range(0, 256)
.Select(o => o / 255f)
.ToArray();
private Color AlphaComposite(Color c1, Color c2)
{
var opa1 = _opacities[c1.A];
var opa2 = _opacities[c2.A];
var ar = opa1 + opa2 - (opa1 * opa2);
var asr = opa2 / ar;
var a1 = 1 - asr;
var a2 = asr * (1 - opa1);
var ab = asr * opa1;
var r = (byte)(c1.R * a1 + c2.R * a2 + c2.R * ab);
var g = (byte)(c1.G * a1 + c2.G * a2 + c2.G * ab);
var b = (byte)(c1.B * a1 + c2.B * a2 + c2.B * ab);
return Color.FromArgb((byte)(ar * 255), r, g, b);
}
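Tying this back to the original goal, the pre-blended colour can then be painted with a single FillRectangle call. A small usage sketch (the rectangle coordinates and file name are just illustrative):
using (var bitmap = new Bitmap(300, 300))
using (var gfx = Graphics.FromImage(bitmap))
using (var brush = new SolidBrush(AlphaComposite(AlphaComposite(r, g), b)))
{
gfx.FillRectangle(brush, 100, 75, 75, 50); // one call instead of three overlapping ones
bitmap.Save("blended.png", System.Drawing.Imaging.ImageFormat.Png);
}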

.Net getting RGB values from a bitmap using Lockbits

I am using the code below to extract RGB values from images. Sometimes this works; however, on certain files (seemingly where the stride is not divisible by the width of the bitmap) it returns mixed-up values:
Dim rect As New Rectangle(0, 0, bmp.Width, bmp.Height)
Dim bmpData As System.Drawing.Imaging.BitmapData = bmp.LockBits(rect, Imaging.ImageLockMode.ReadOnly, Imaging.PixelFormat.Format24bppRgb)
Dim ptr As IntPtr = bmpData.Scan0
Dim cols As New List(Of Color)
Dim bytes As Integer = Math.Abs(bmpData.Stride) * bmp.Height
Dim rgbValues(bytes - 1) As Byte
System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes)
' Retrieve RGB values
For i = modByte To rgbValues.Length Step 3
cols.Add(Color.FromArgb(rgbValues(i + 2), rgbValues(i + 1), rgbValues(i)))
Next
bmp.UnlockBits(bmpData)
bmp.Dispose()
Dim colsCnt As List(Of RgbPixels) = cols.GroupBy(Function(g) New With {Key .R = g.R, Key .G = g.G, Key .B = g.B}).Select(Function(s) New RgbPixels With {.Colour = Color.FromArgb(s.Key.R, s.Key.G, s.Key.B), .Amount = s.Count()}).ToList()
After grouping the resulting colours, the values are something like:
R G B
255 255 255
255 255 0
255 0 0
0 0 255
0 255 255
Or some variation of that, when they should just be:
R G B
255 255 255
0 0 0
Please point me in the right direction. BTW, my source bmp is in PixelFormat.Format24bppRgb too, so I don't believe that is the problem. Also, if you can only answer in C#, that is not a problem.
The problem is that you're not taking the stride value into account. The stride is always padded so that the byte width of each image row is divisible by 4. This is an optimization related to memory copying and how the CPU works; it goes back decades and is still useful.
For example, if an image has a width of 13 pixels, the stride would be like this (simplified to one component):
============= (width 13 pixels = 13 bytes when using RGB)
================ (stride would be 16)
For an image 14 pixels wide it would look like this:
============== (width 14 pixels = 14 bytes when using RGB)
================ (stride would still be 16)
So in your code you need to process the data one stride-sized row at a time instead of treating it as one contiguous byte array, unless you are working with fixed, known image widths.
I modified your code so it skips rows by stride:
Dim rect As New Rectangle(0, 0, bmp.Width, bmp.Height)
Dim bmpData As System.Drawing.Imaging.BitmapData = bmp.LockBits(rect, Imaging.ImageLockMode.ReadOnly, Imaging.PixelFormat.Format24bppRgb)
Dim ptr As IntPtr = bmpData.Scan0
Dim cols As New List(Of Color)
Dim bytes As Integer = Math.Abs(bmpData.Stride) * bmp.Height
Dim rgbValues(bytes - 1) As Byte
System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes)
Dim x, y, dx, l As Integer
For y = 0 To rect.Height - 1
l = y * bmpData.Stride 'calculate the line offset based on stride
For x = 0 To rect.Width - 1
dx = l + x * 3 '3 for RGB, 4 for ARGB; notice l is used as the row offset
cols.Add(Color.FromArgb(rgbValues(dx + 2), _
rgbValues(dx + 1), _
rgbValues(dx)))
Next
Next
' Retrieve RGB values
'For i = modByte To rgbValues.Length Step 3
' cols.Add(Color.FromArgb(rgbValues(i + 2), rgbValues(i + 1), rgbValues(i)))
'Next
bmp.UnlockBits(bmpData)
bmp.Dispose()
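Since the question mentions that a C# answer is fine too, here is a minimal C# sketch of the same stride-aware loop (assuming the same bmp variable and 24bpp data; the comment shows how the stride is typically padded):
var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
var bmpData = bmp.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
// For 24bpp data the stride is the row width in bytes rounded up to a multiple of 4:
// stride = (width * 3 + 3) / 4 * 4
var bytes = Math.Abs(bmpData.Stride) * bmp.Height;
var rgbValues = new byte[bytes];
System.Runtime.InteropServices.Marshal.Copy(bmpData.Scan0, rgbValues, 0, bytes);
var cols = new List<Color>();
for (int y = 0; y < rect.Height; y++)
{
int line = y * bmpData.Stride; // start of the row, including padding
for (int x = 0; x < rect.Width; x++)
{
int dx = line + x * 3; // 3 bytes per pixel for 24bpp RGB
cols.Add(Color.FromArgb(rgbValues[dx + 2], rgbValues[dx + 1], rgbValues[dx]));
}
}
bmp.UnlockBits(bmpData);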

SlimDX Direct3D 11 Indexing Problems

I'm trying to draw an indexed square using SlimDX and Direct3D11. I've managed to draw a square without indices, but when I swap to my indexed version I just get a blank screen.
My input layout is set to only take position data (I'm essentially extending from the third tutorial on the SlimDX website) and to draw Triangle Lists.
My render loop code is as follows (I am using the triangle.fx pixel and vertex shader files from the tutorial; they take vertex positions in screen coordinates and paint them yellow; D3D is shorthand for SlimDX.Direct3D11):
//clear the render target
context.ClearRenderTargetView(renderTarget, new Color4(0.5f, 0.5f, 1.0f));
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(mesh.VertexBuffer,12, 0));
context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UNorm, 0);
context.DrawIndexed(mesh.indices, 0, 0);
swapChain.Present(0, PresentFlags.None);
"mesh" is a struct that holds a Vertex buffer, Index buffer and vertex count. The data is filled here:
Vertex[] vertexes = new Vertex[4];
vertexes[0].Position = new Vector3(0, 0, 0.5f);
vertexes[1].Position = new Vector3(0, 0.5f, 0.5f);
vertexes[2].Position = new Vector3(0.5f, 0, 0.5f);
vertexes[3].Position = new Vector3(0.5f, 0.5f, 0.5f);
UInt16[] indexes = { 0, 1, 2, 1, 3, 2 };
DataStream vertices = new DataStream(12 * 4, true, true);
foreach (Vertex vertex in vertexes)
{
vertices.Write(vertex.Position);
}
vertices.Position = 0;
DataStream indices = new DataStream(sizeof(int) * 6, true, true);
foreach (UInt16 index in indexes)
{
indices.Write(index);
}
indices.Position = 0;
mesh = new Mesh();
D3D.Buffer vertexBuffer = new D3D.Buffer(device, vertices, 12 * 4, ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
mesh.VertexBuffer = vertexBuffer;
mesh.IndexBuffer = new D3D.Buffer(device, indices, 2 * 6, ResourceUsage.Default, BindFlags.IndexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
mesh.vertices = vertexes.GetLength(0);
mesh.indices = indexes.Length;
All of this is nearly identical to my unindexed square method (with the addition of index buffers and indices, and the removal of two duplicate vertices that aren't needed with indexing), but while the unindexed method draws a square, the indexed method doesn't.
My current theory is that there is either something wrong with this line:
mesh.IndexBuffer = new D3D.Buffer(device, indices, 2 * 6, ResourceUsage.Default, BindFlags.IndexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
Or these lines:
context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UNorm, 0);
context.DrawIndexed(mesh.indices, 0, 0);
Why don't you just use a vertex and index buffer for this simple example?
Like this (DirectX 9):
VertexBuffer vb;
IndexBuffer ib;
vertices = new PositionColored[WIDTH * HEIGHT];
//vertex creation
vb = new VertexBuffer(device, HEIGHT * WIDTH * PositionColored.SizeInBytes, Usage.WriteOnly, PositionColored.Format, Pool.Default);
DataStream stream = vb.Lock(0, 0, LockFlags.None);
stream.WriteRange(vertices);
vb.Unlock();
indices = new short[(WIDTH - 1) * (HEIGHT - 1) * 6];
//indices creation
ib = new IndexBuffer(device, sizeof(int) * (WIDTH - 1) * (HEIGHT - 1) * 6, Usage.WriteOnly, Pool.Default, false);
stream = ib.Lock(0, 0, LockFlags.None);
stream.WriteRange(indices);
ib.Unlock();
//Drawing
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.DarkSlateBlue, 1.0f, 0);
device.BeginScene();
device.VertexFormat = PositionColored.Format;
device.SetStreamSource(0, vb, 0, PositionColored.SizeInBytes);
device.Indices = ib;
device.SetTransform(TransformState.World, Matrix.Translation(-HEIGHT / 2, -WIDTH / 2, 0) * Matrix.RotationZ(angle));
device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, WIDTH * HEIGHT, 0, indices.Length / 3);
device.EndScene();
device.Present();
I use the Mesh class in another way (DirectX 9 code again):
private void CreateMesh()
{
meshTerrain = new Mesh(device, (WIDTH - 1) * (HEIGHT - 1) * 2, WIDTH * HEIGHT, MeshFlags.Managed, PositionColored.Format);
DataStream stream = meshTerrain.VertexBuffer.Lock(0, 0, LockFlags.None);
stream.WriteRange(vertices);
meshTerrain.VertexBuffer.Unlock();
stream.Close();
stream = meshTerrain.IndexBuffer.Lock(0, 0, LockFlags.None);
stream.WriteRange(indices);
meshTerrain.IndexBuffer.Unlock();
stream.Close();
meshTerrain.GenerateAdjacency(0.5f);
meshTerrain.OptimizeInPlace(MeshOptimizeFlags.VertexCache);
meshTerrain = meshTerrain.Clone(device, MeshFlags.Dynamic, PositionNormalColored.Format);
meshTerrain.ComputeNormals();
}
//Drawing
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.DarkSlateBlue, 1.0f, 0);
device.BeginScene();
device.VertexFormat = PositionColored.Format;
device.SetTransform(TransformState.World, Matrix.Translation(-HEIGHT / 2, -WIDTH / 2, 0) * Matrix.RotationZ(angle));
int numSubSets = meshTerrain.GetAttributeTable().Length;
for (int i = 0; i < numSubSets; i++)
{
meshTerrain.DrawSubset(i);
}
device.EndScene();
device.Present();

How do I recolor an image? (see images)

How do I achieve this kind of color replacement programmatically?
So this is the function I have used to replace a pixel:
Color.FromArgb(
oldColorInThisPixel.R + (byte)((1 - oldColorInThisPixel.R / 255.0) * colorToReplaceWith.R),
oldColorInThisPixel.G + (byte)((1 - oldColorInThisPixel.G / 255.0) * colorToReplaceWith.G),
oldColorInThisPixel.B + (byte)((1 - oldColorInThisPixel.B / 255.0) * colorToReplaceWith.B)
)
Thank you, CodeInChaos!
The formula for calculating the new pixel is:
newColor.R = OldColor;
newColor.G = OldColor;
newColor.B = 255;
Generalizing to arbitrary colors:
I assume you want to map white to white and black to that color. So the formula is newColor = TargetColor + (White - TargetColor) * Input
newColor.R = OldColor + (1 - oldColor / 255.0) * TargetColor.R;
newColor.G = OldColor + (1 - oldColor / 255.0) * TargetColor.G;
newColor.B = OldColor + (1 - oldColor / 255.0) * TargetColor.B;
And then just iterate over the pixels of the image(byte array) and write them to a new RGB array. There are many threads on how to copy an image into a byte array and manipulate it.
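A minimal per-pixel sketch of that formula (using GetPixel/SetPixel for clarity; the LockBits approach discussed elsewhere on this page is much faster). The method name and signature are just illustrative:
static Bitmap Colorize(Bitmap source, Color target)
{
// Maps white -> white and black -> target, per newColor = oldColor + (1 - oldColor / 255) * target
var result = new Bitmap(source.Width, source.Height);
for (int y = 0; y < source.Height; y++)
{
for (int x = 0; x < source.Width; x++)
{
Color old = source.GetPixel(x, y);
int r = old.R + (int)((1 - old.R / 255.0) * target.R);
int g = old.G + (int)((1 - old.G / 255.0) * target.G);
int b = old.B + (int)((1 - old.B / 255.0) * target.B);
result.SetPixel(x, y, Color.FromArgb(old.A, r, g, b));
}
}
return result;
}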
The easiest approach would be to use a ColorMatrix for processing images; you can even show an on-the-fly preview of the desired effect, which is how many colour filters are implemented in graphics editing applications. Here and here you can find introductions to color effects using ColorMatrix in C#. With a ColorMatrix you can build the colorizing filter you want, as well as sepia, black/white, invert, range, luminosity, contrast, brightness, levels (by multiple passes), etc.
EDIT: Here is an example (update: fixed the color matrix to shift darker values into blue, instead of the previous version which zeroed everything but the blue parts, and added 0.5f to blue because in the picture above black is changed into 50% blue):
var cm = new ColorMatrix(new float[][]
{
new float[] {1, 0, 0, 0, 0},
new float[] {0, 1, 1, 0, 0},
new float[] {0, 0, 1, 0, 0},
new float[] {0, 0, 0, 1, 0},
new float[] {0, 0, 0.5f, 0, 1}
});
var img = Image.FromFile("C:\\img.png");
var ia = new ImageAttributes();
ia.SetColorMatrix(cm);
var bmp = new Bitmap(img.Width, img.Height);
var gfx = Graphics.FromImage(bmp);
var rect = new Rectangle(0, 0, img.Width, img.Height);
gfx.DrawImage(img, rect, 0, 0, img.Width, img.Height, GraphicsUnit.Pixel, ia);
bmp.Save("C:\\processed.png", ImageFormat.Png);
You'll want to use a ColorMatrix here. The source image is grayscale, so all its R, G and B values are equal. Then it is just a matter of replacing black with RGB = (0, 0, 255) for dark blue, while keeping white at RGB = (255, 255, 255). The matrix can thus look like this:
1 0 0 0 0 // not changing red
0 1 0 0 0 // not changing green
0 0 0 0 0 // B = 0
0 0 0 1 0 // not changing alpha
0 0 1 0 1 // B = 255
This sample form reproduces the right side image:
public partial class Form1 : Form {
public Form1() {
InitializeComponent();
}
private Image mImage;
protected override void OnPaint(PaintEventArgs e) {
if (mImage != null) e.Graphics.DrawImage(mImage, Point.Empty);
base.OnPaint(e);
}
private void button1_Click(object sender, EventArgs e) {
using (var srce = Image.FromFile(@"c:\temp\grayscale.png")) {
if (mImage != null) mImage.Dispose();
mImage = new Bitmap(srce.Width, srce.Height);
float[][] coeff = {
new float[] { 1, 0, 0, 0, 0 },
new float[] { 0, 1, 0, 0, 0 },
new float[] { 0, 0, 0, 0, 0 },
new float[] { 0, 0, 0, 1, 0 },
new float[] { 0, 0, 1, 0, 1 }};
ColorMatrix cm = new ColorMatrix(coeff);
var ia = new ImageAttributes();
ia.SetColorMatrix(cm);
using (var gr = Graphics.FromImage(mImage)) {
gr.DrawImage(srce, new Rectangle(0, 0, mImage.Width, mImage.Height),
0, 0, mImage.Width, mImage.Height, GraphicsUnit.Pixel, ia);
}
}
this.Invalidate();
}
}
It depends a lot on what your image format is and what your final format is going to be.
It also depends on what tool you want to use.
You may use:
GDI
GDI+
Image Processing library such as OpenCV
GDI is quite fast but can be quite cumbersome. You need to change the palette.
GDI+ is exposed in .NET and can be slower but easier.
OpenCV is great but adds dependency.
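For an indexed (palettized) image, "changing the palette" could look roughly like this in GDI+ terms (a sketch; the file names are illustrative and the loop assumes palette entry i holds gray level i):
var indexed = (Bitmap)Image.FromFile("grayscale8bpp.png"); // assumed to be an 8bpp indexed image
ColorPalette palette = indexed.Palette;
for (int i = 0; i < palette.Entries.Length; i++)
palette.Entries[i] = Color.FromArgb(i, i, 255); // white stays white, black becomes blue
indexed.Palette = palette; // reassign so the new palette takes effect
indexed.Save("recoloured.png", ImageFormat.Png);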
(UPDATE)
This code changes the image to blue-scale instead of grey-scale (the image format is 32-bit ARGB):
private static unsafe void ChangeColors(string imageFileName)
{
const int noOfChannels = 4;
Bitmap img = (Bitmap) Image.FromFile(imageFileName);
BitmapData data = img.LockBits(new Rectangle(0,0,img.Width, img.Height), ImageLockMode.ReadWrite, img.PixelFormat);
byte* ptr = (byte*) data.Scan0;
for (int j = 0; j < data.Height; j++)
{
byte* scanPtr = ptr + (j * data.Stride);
for (int i = 0; i < data.Stride; i++, scanPtr++)
{
if (i % noOfChannels == 3)
{
*scanPtr = 255;
continue;
}
if (i % noOfChannels != 0)
{
*scanPtr = 0;
}
}
}
img.UnlockBits(data);
img.Save(Path.Combine( Path.GetDirectoryName(imageFileName), "result.png"), ImageFormat.Png);
}
This CodeProject article covers this and more: http://www.codeproject.com/KB/GDI-plus/Image_Processing_Lab.aspx
It uses the AForge.NET library to do a Hue filter on an image for a similar effect:
// create filter
AForge.Imaging.Filters.HSLFiltering filter =
new AForge.Imaging.Filters.HSLFiltering( );
filter.Hue = new IntRange( 340, 20 );
filter.UpdateHue = false;
filter.UpdateLuminance = false;
// apply the filter
System.Drawing.Bitmap newImage = filter.Apply( image );
It also depends on what you want: do you want to keep the original and only adjust the way it is shown? An effect or pixelshader in WPF might do the trick and be very fast.
If any Android devs end up looking at this, this is what I came up with to grayscale and tint an image using CodesInChaos's formula and the Android graphics classes ColorMatrix and ColorMatrixColorFilter.
Thanks for the help!
public static ColorFilter getColorFilter(Context context) {
final int tint = ContextCompat.getColor(context, R.color.tint);
final float R = Color.red(tint);
final float G = Color.green(tint);
final float B = Color.blue(tint);
final float Rs = R / 255;
final float Gs = G / 255;
final float Bs = B / 255;
// resultColor = oldColor + (1 - oldColor/255) * tintColor
final float[] colorTransform = {
1, -Rs, 0, 0, R,
1, -Gs, 0, 0, G,
1, -Bs, 0, 0, B,
0, 0, 0, 0.9f, 0};
final ColorMatrix grayMatrix = new ColorMatrix();
grayMatrix.setSaturation(0f);
grayMatrix.postConcat(new ColorMatrix(colorTransform));
return new ColorMatrixColorFilter(grayMatrix);
}
The ColorFilter can then be applied to an ImageView
imageView.setColorFilter(getColorFilter(imageView.getContext()));
