HLSL: modify depth in pixel shader (C#)

I need to render an image (with depth) that I get from an external source. I can construct two textures and pass them into a shader without a problem (I can verify that the values sampled in the pixel shader are correct).
Here's what my HLSL looks like:
// image texture
Texture2D m_TextureColor : register(t0);
// depth texture with values [0..1]
Texture2D<float> m_TextureDepth : register(t1);
// sampler to forbid linear filtering since we're dealing with pixels
SamplerState m_TextureSampler { Filter = MIN_MAG_MIP_POINT; };
struct VS_IN
{
float4 position : POSITION;
float2 texcoord : TEXCOORD;
};
struct VS_OUT
{
float4 position : SV_POSITION;
float2 texcoord : TEXCOORD0;
};
struct PS_OUT
{
float4 color : COLOR0;
float depth : DEPTH0;
};
VS_OUT VS(VS_IN input)
{
VS_OUT output = (VS_OUT)0;
output.position = input.position;
output.texcoord = input.texcoord;
return output;
}
PS_OUT PS(VS_OUT input) : SV_Target
{
PS_OUT output = (PS_OUT)0;
output.color = m_TextureColor.SampleLevel(m_TextureSampler, input.texcoord, 0);
// I want to modify depth of the pixel,
// but it looks like it has no effect on depth no matter what I set here
output.depth = m_TextureDepth.SampleLevel(m_TextureSampler, input.texcoord, 0);
return output;
}
I construct the vertex buffer from these (with PrimitiveTopology.TriangleStrip), where the first argument (Vector4) is the position and the second (Vector2) is the texture coordinate:
new[]
{
new Vertex(new Vector4(-1, -1, 0.5f, 1), new Vector2(0, 1)),
new Vertex(new Vector4(-1, 1, 0.5f, 1), new Vector2(0, 0)),
new Vertex(new Vector4(1, -1, 0.5f, 1), new Vector2(1, 1)),
new Vertex(new Vector4(1, 1, 0.5f, 1), new Vector2(1, 0)),
}
Everything works just fine: I see my image, and I can sample depth from the depth texture and construct something visual from it (that's how I can verify that the depth values I'm sampling are correct). However, I can't figure out how to modify the pixel's depth so that it is consumed properly when the depth test happens. At the moment everything depends solely on the z value I set in the vertex positions.
This is how I'm setting up DirectX11 (I'm using SharpDX and C#):
var swapChainDescription = new SwapChainDescription
{
BufferCount = 1,
ModeDescription = new ModeDescription(bufferSize.Width, bufferSize.Height, new Rational(60, 1), Format.R8G8B8A8_UNorm),
IsWindowed = true,
OutputHandle = HostHandle,
SampleDescription = new SampleDescription(1, 0),
SwapEffect = SwapEffect.Discard,
Usage = Usage.RenderTargetOutput,
};
var swapChainFlags = DeviceCreationFlags.None | DeviceCreationFlags.BgraSupport;
SharpDX.Direct3D11.Device.CreateWithSwapChain(DriverType.Hardware, swapChainFlags, swapChainDescription, out var device, out var swapchain);
Setting back buffer and depth/stencil buffer:
// color buffer
using (var textureColor = SwapChain.GetBackBuffer<Texture2D>(0))
{
TextureColorResourceView = new RenderTargetView(Device, textureColor);
}
// depth buffer
using (var textureDepth = new Texture2D(Device, new Texture2DDescription
{
Format = Format.D32_Float,
ArraySize = 1,
MipLevels = 1,
Width = BufferSize.Width,
Height = BufferSize.Height,
SampleDescription = new SampleDescription(1, 0),
Usage = ResourceUsage.Default,
BindFlags = BindFlags.DepthStencil,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None
}))
{
TextureDepthResourceView = new DepthStencilView(Device, textureDepth);
}
DeviceContext.OutputMerger.SetTargets(TextureDepthResourceView, TextureColorResourceView);
Preparing depth stencil state:
var description = DepthStencilStateDescription.Default();
description.DepthComparison = Comparison.LessEqual;
description.IsDepthEnabled = true;
description.DepthWriteMask = DepthWriteMask.All;
DepthState = new DepthStencilState(Device, description);
And using it:
DeviceContext.OutputMerger.SetDepthStencilState(DepthState);
This is how I construct my color/depth textures I'm sending to shader:
public static (ShaderResourceView resource, Texture2D texture) CreateTextureDynamic(this Device device, System.Drawing.Size size, Format format)
{
var textureDesc = new Texture2DDescription
{
MipLevels = 1,
Format = format,
Width = size.Width,
Height = size.Height,
ArraySize = 1,
BindFlags = BindFlags.ShaderResource,
Usage = ResourceUsage.Dynamic,
SampleDescription = new SampleDescription(1, 0),
CpuAccessFlags = CpuAccessFlags.Write,
};
var texture = new Texture2D(device, textureDesc);
return (new ShaderResourceView(device, texture), texture);
}
Also since I need to update them frequently:
public static void UpdateResource(this Texture2D texture, int[] buffer, System.Drawing.Size size)
{
var dataBox = texture.Device.ImmediateContext.MapSubresource(texture, 0, MapMode.WriteDiscard, MapFlags.None, out var dataStream);
Parallel.For(0, size.Height, rowIndex => Marshal.Copy(buffer, size.Width * rowIndex, dataBox.DataPointer + dataBox.RowPitch * rowIndex, size.Width));
dataStream.Dispose();
texture.Device.ImmediateContext.UnmapSubresource(texture, 0);
}
public static void UpdateResource(this Texture2D texture, float[] buffer, System.Drawing.Size size)
{
var dataBox = texture.Device.ImmediateContext.MapSubresource(texture, 0, MapMode.WriteDiscard, MapFlags.None, out var dataStream);
Parallel.For(0, size.Height, rowIndex => Marshal.Copy(buffer, size.Width * rowIndex, dataBox.DataPointer + dataBox.RowPitch * rowIndex, size.Width));
dataStream.Dispose();
texture.Device.ImmediateContext.UnmapSubresource(texture, 0);
}
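The row-by-row copy in these helpers matters: a mapped D3D11 texture row occupies DataBox.RowPitch bytes, which the driver may pad beyond width * bytesPerPixel, so a single contiguous copy would skew every row. A minimal sketch of the idea (Python used only to illustrate the arithmetic; `copy_rows` and the pitch value are illustrative, and the real pitch always comes from the DataBox):

```python
def copy_rows(src: bytes, width: int, height: int, bpp: int, row_pitch: int) -> bytearray:
    """Copy a tightly packed source image into a pitch-padded destination."""
    dst = bytearray(row_pitch * height)
    row_bytes = width * bpp
    for row in range(height):
        # each destination row starts at row * row_pitch, not row * row_bytes
        dst[row * row_pitch : row * row_pitch + row_bytes] = \
            src[row * row_bytes : (row + 1) * row_bytes]
    return dst

# 4x2 RGBA image, but the driver reports a 32-byte pitch (16 data + 16 padding)
src = bytes(range(4 * 2 * 4))
dst = copy_rows(src, width=4, height=2, bpp=4, row_pitch=32)
assert dst[0:16] == src[0:16]    # row 0 lands at offset 0
assert dst[32:48] == src[16:32]  # row 1 starts at RowPitch, not at byte 16
```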
I also googled a lot about this and found similar posts, like this one: https://www.gamedev.net/forums/topic/573961-how-to-set-depth-value-in-pixel-shader/ However, I couldn't manage to solve it on my side.
Thanks in advance!

To write to the depth buffer, you need to target the SV_Depth system-value semantic. So your pixel shader output struct would look more like the following:
struct PS_OUT
{
float4 color : SV_Target;
float depth : SV_Depth;
};
And the shader entry point would no longer specify SV_Target as in your example (the SV_ outputs are now defined within the struct). So it would look like:
PS_OUT PS(VS_OUT input)
{
PS_OUT output = (PS_OUT)0;
output.color = m_TextureColor.SampleLevel(m_TextureSampler, input.texcoord, 0);
// Now that output.depth is defined with SV_Depth, and you have depth-write enabled,
// this should write to the depth buffer.
output.depth = m_TextureDepth.SampleLevel(m_TextureSampler, input.texcoord, 0);
return output;
}
Note that you may incur some performance penalties on explicitly writing to depth (specifically on AMD hardware) since that forces a bypass of their early-depth hardware optimization. All future draw calls using that depth buffer will have early-Z optimizations disabled, so it's generally a good idea to perform the depth-write operation as late as possible.

Related

Sharpdx Dynamic Texture Updating Incorrectly

This is C# WPF using SharpDX 4.0.
I'm trying to update a dynamic texture on each render loop, using a color buffer generated from a library. I'm seeing an issue where the resulting texture doesn't match the expected bitmap: the texture appears to be wider than expected, or the format is larger than expected.
var surfaceWidth = 200; var surfaceHeight = 200;
var pixelBytes = surfaceWidth * surfaceHeight * 4;
//Set up the color buffer and byte array to stream to the texture
_colorBuffer = new int[surfaceWidth * surfaceHeight];
_textureStreamBytes = new byte[pixelBytes]; //160000 length
//Create the texture to update
_scanTexture = new Texture2D(Device, new Texture2DDescription()
{
Format = Format.B8G8R8A8_UNorm,
ArraySize = 1,
MipLevels = 1,
Width = SurfaceWidth,
Height = SurfaceHeight,
SampleDescription = new SampleDescription(1, 0),
Usage = ResourceUsage.Dynamic,
BindFlags = BindFlags.ShaderResource,
CpuAccessFlags = CpuAccessFlags.Write,
OptionFlags = ResourceOptionFlags.None,
});
_scanResourceView = new ShaderResourceView(Device, _scanTexture);
context.PixelShader.SetShaderResource(0, _scanResourceView);
And on render I populate the color buffer and write to the texture.
protected void Render()
{
Device.ImmediateContext.ClearRenderTargetView(
RenderTargetView, new SharpDX.Mathematics.Interop.RawColor4(0.8f,0.8f,0,1));
Library.GenerateColorBuffer(ref _colorBuffer);
System.Buffer.BlockCopy(_colorBuffer, 0, _textureStreamBytes, 0, _textureStreamBytes.Length);
_parent.DrawBitmap(ref _colorBuffer);
DataBox databox = context.MapSubresource(_scanTexture, 0, MapMode.WriteDiscard, SharpDX.Direct3D11.MapFlags.None, out DataStream stream);
if (!databox.IsEmpty)
stream.Write(_textureStreamBytes, 0, _textureStreamBytes.Length);
context.UnmapSubresource(_scanTexture, 0);
context.Draw(4, 0);
}
Sampler creation and setting before the above happens:
var sampler = new SamplerState(_device, new SamplerStateDescription()
{
Filter = SharpDX.Direct3D11.Filter.MinMagMipLinear,
AddressU = TextureAddressMode.Wrap,
AddressV = TextureAddressMode.Wrap,
AddressW = TextureAddressMode.Wrap,
BorderColor = SharpDX.Color.Blue,
ComparisonFunction = Comparison.Never,
MaximumAnisotropy = 1,
MipLodBias = 0,
MinimumLod = 0,
MaximumLod = 0,
});
context = _device.ImmediateContext;
context.InputAssembler.PrimitiveTopology = PrimitiveTopology.TriangleStrip;
context.VertexShader.Set(vertexShader);
context.Rasterizer.SetViewport(new Viewport(0, 0, SurfaceWidth, SurfaceHeight, 0.0f, 1.0f));
context.PixelShader.Set(pixelShader);
context.PixelShader.SetSampler(0, sampler);
context.OutputMerger.SetTargets(depthView, _renderTargetView);
And the shader (generating a full-screen quad from SV_VERTEXID, with no vertex buffer):
SamplerState pictureSampler;
Texture2D picture;
struct PS_IN
{
float4 pos : SV_POSITION;
float2 tex : TEXCOORD;
};
PS_IN VS(uint vI : SV_VERTEXID)
{
float2 texcoord = float2(vI & 1,vI >> 1); //you can use these for texture coordinates later
PS_IN output = (PS_IN)0;
output.pos = float4((texcoord.x - 0.5f) * 2, -(texcoord.y - 0.5f) * 2, 0, 1);
output.tex = texcoord;
return output;
}
float4 PS(PS_IN input) : SV_Target
{
return picture.Sample(pictureSampler, input.tex);
}
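As an aside, the vertex shader above derives both position and texture coordinates purely from the vertex ID, so the four strip vertices cover the whole screen. The same arithmetic, checked outside HLSL (Python, illustrative only):

```python
def corner(vi: int):
    """Mirror of: texcoord = float2(vI & 1, vI >> 1); pos = ((u - 0.5) * 2, -(v - 0.5) * 2)."""
    tex = (vi & 1, vi >> 1)
    pos = ((tex[0] - 0.5) * 2, -(tex[1] - 0.5) * 2)
    return tex, pos

# vertex IDs 0..3 walk the unit square in triangle-strip order
assert [corner(i)[0] for i in range(4)] == [(0, 0), (1, 0), (0, 1), (1, 1)]
assert corner(0)[1] == (-1.0, 1.0)   # texcoord (0,0) -> top-left in NDC
assert corner(3)[1] == (1.0, -1.0)   # texcoord (1,1) -> bottom-right in NDC
```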
What I'm seeing is:
_colorBuffer length 40000 (200 width *200 height)
_textureStreamBytes length 160000 (200 * 200 * 4bytes)
Stream from databox Length = 179200, a difference of 19200 bytes / 4800 pixels.
That works out to 24 extra pixels for each of the 200 rows. In other words, the texture is 24 pixels wider than expected. But debugging shows width/height as 200.
Image showing the issue. Left is rendered view, right is bitmap
Does anyone know what I'm doing wrong here? Or things that should/could be done differently?
Thank you.
P.S. I've got this working correctly in OpenGL by using a similar process but need to get it working for directx:
gl.TexSubImage2D(OpenGL.GL_TEXTURE_2D, 0, 0, 0, (int)width, (int)height, OpenGL.GL_RGBA, OpenGL.GL_UNSIGNED_BYTE, colorBuffer);
From experimenting, it appears that multiples of 32 are needed for both width and height. (For example, 100 * 128 still causes an issue, even though 128 is a multiple of 32.) Instead I'm using:
var newHeight = (int)(initialHeight / 32) * 32;
var newWidth = (int)(initialWidth / 32) * 32;
I'm not sure if the root issue is my own mistake or a sharpDX issue or a DirectX issue. The other way I see to solve this issue is to add padding to the pixel array to account for the difference in length at the end of each row.
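For what it's worth, the numbers reported in the question are exactly what row-pitch padding produces: copying each row to an offset of rowIndex * RowPitch (as described by the DataBox returned from MapSubresource), rather than one contiguous stream.Write, avoids the skew without rounding sizes to multiples of 32. A sketch of the arithmetic (Python, illustrative only):

```python
# 179200 mapped bytes over 200 rows reveals the driver's row pitch.
width, height, bpp = 200, 200, 4
mapped_bytes = 179200

row_pitch = mapped_bytes // height        # bytes per row as the driver laid it out
assert row_pitch == 896                   # vs. 200 * 4 = 800 tightly packed
extra_pixels_per_row = (row_pitch - width * bpp) // bpp
assert extra_pixels_per_row == 24         # the "24 pixels wider" from the question
```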

Texture appearing as stretched in DirectX11

I'm using DirectX11 with SharpDX and WPF D3DImage in C# to render an image as a texture.
Previously I was updating my render target directly (which worked perfectly) rather than using a pixel shader to display the updated texture on my quad.
Having realized my mistake, I decided to use pixel shaders so that I can implement a letterboxing technique with the help of the viewport.
In doing so, I can't figure out why my rendered texture is a sub-region of the image, which in turn is displayed as stretched.
Code Used:
a) Initialisation code
var device = this.Device;
var context = device.ImmediateContext;
byte[] fileBytes = GetShaderEffectsFileBytes();
var vertexShaderByteCode = ShaderBytecode.Compile(fileBytes, "VSMain", "vs_5_0", ShaderFlags.None, EffectFlags.None);
var vertexShader = new VertexShader(device, vertexShaderByteCode);
var pixelShaderByteCode = ShaderBytecode.Compile(fileBytes, "PSMain", "ps_5_0", ShaderFlags.None, EffectFlags.None);
var pixelShader = new PixelShader(device, pixelShaderByteCode);
layout = new InputLayout(device, vertexShaderByteCode, new[] {
new InputElement("SV_Position", 0, Format.R32G32B32A32_Float, 0, 0),
new InputElement("TEXCOORD", 0, Format.R32G32_Float, 16, 0),
});
// Write vertex data to a datastream
var stream = new DataStream(Utilities.SizeOf<VertexPositionTexture>() * 6, true, true);
stream.WriteRange(new[]
{
new VertexPositionTexture(
new Vector4(1, 1, 0.5f, 1.0f), // position top-right
new Vector2(1.0f, 0.0f)
),
new VertexPositionTexture(
new Vector4(1, -1, 0.5f, 1.0f), // position bottom-right
new Vector2(1.0f, 1.0f)
),
new VertexPositionTexture(
new Vector4(-1, 1, 0.5f, 1.0f), // position top-left
new Vector2(0.0f, 0.0f)
),
new VertexPositionTexture(
new Vector4(-1, -1, 0.5f, 1.0f), // position bottom-left
new Vector2(0.0f, 1.0f)
),
});
stream.Position = 0;
vertices = new SharpDX.Direct3D11.Buffer(device, stream, new BufferDescription()
{
BindFlags = BindFlags.VertexBuffer,
CpuAccessFlags = CpuAccessFlags.None,
OptionFlags = ResourceOptionFlags.None,
SizeInBytes = Utilities.SizeOf<VertexPositionTexture>() * 6,
Usage = ResourceUsage.Default,
StructureByteStride = 0
});
stream.Dispose();
context.InputAssembler.InputLayout = (layout);
context.InputAssembler.PrimitiveTopology = (PrimitiveTopology.TriangleStrip);
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(vertices, Utilities.SizeOf<VertexPositionTexture>(), 0));
context.VertexShader.Set(vertexShader);
context.GeometryShader.Set(null);
context.PixelShader.Set(pixelShader);
Device.ImmediateContext.OutputMerger.SetTargets(m_depthStencilView, m_RenderTargetView);
this.ImgSource.SetRenderTargetDX11(this.RenderTarget);
Texture2D flower = Texture2D.FromFile<Texture2D>(this.Device, "3.jpg");
var srv = new ShaderResourceView(this.Device, flower);
Device.ImmediateContext.PixelShader.SetShaderResource(0, srv);
srv.Dispose();
b) VertexPositionTexture
public struct VertexPositionTexture
{
public VertexPositionTexture(Vector4 position, Vector2 textureUV)
{
Position = position;
TextureUV = textureUV;
}
public Vector4 Position;
public Vector2 TextureUV;
}
c) Rendering Code
Device.ImmediateContext.ClearRenderTargetView(this.m_RenderTargetView, new Color4(Color.Blue.R, Color.Blue.G, Color.Blue.B, Color.Blue.A));
Device.ImmediateContext.ClearDepthStencilView(m_depthStencilView, DepthStencilClearFlags.Depth, 1.0f, 0);
Device.ImmediateContext.Draw(4, 0);
Device.ImmediateContext.Flush();
this.ImgSource.InvalidateD3DImage();
d) Shader Effects File:
Texture2D ShaderTexture : register(t0);
SamplerState Sampler : register (s0);
struct VertexShaderInput
{
float4 Position : SV_Position;
float2 TextureUV : TEXCOORD0;
};
struct VertexShaderOutput
{
float4 Position : SV_Position;
float2 TextureUV : TEXCOORD0;
};
VertexShaderOutput VSMain(VertexShaderInput input)
{
VertexShaderOutput output = (VertexShaderOutput)0;
output.Position = input.Position;
output.TextureUV = input.TextureUV;
return output;
}
float4 PSMain(VertexShaderOutput input) : SV_Target
{
return ShaderTexture.Sample(Sampler, input.TextureUV);
}
Also, below I have added a screenshot of the issue I have been having, along with the actual image that should be rendered.
Actual image
Failed texture rendered
Any suggestions or help would be really appreciated, as I have researched the web and various forums but had no luck.
Thanks.
Link to Sample Test Application

SharpDX DataStream-based buffer initialization fails

I am trying to create a very basic mesh renderer using D3D11 to use in my final project for school. Although I followed the basic online tutorials like the rastertek site's and Frank De Luna's book to the letter, used the simplest passthrough shader imaginable, etc, I couldn't get my triangles to show up on the screen. Finally I found out about VS 2013's graphics debugging ability, and I was able to see that my vertex and index buffers were filled with garbage data. I've hosted the solution here if you want to run the code, but can someone familiar with D3D and/or its SharpDX C# wrapper tell me what I'm doing wrong in the following code?
This is my geometry data. The Vertex struct has Vector4 position and color fields, and Index is an alias for ushort.
var vertices = new[]
{
new Vertex(new Vector4(-1, 1, 0, 1), Color.Red),
new Vertex(new Vector4(1, 1, 0, 1), Color.Green),
new Vertex(new Vector4(1, -1, 0, 1), Color.Blue),
new Vertex(new Vector4(-1, -1, 0, 1), Color.White)
};
var indices = new Index[]
{
0, 2, 1,
0, 3, 2
};
And here is the code that fails to initialize my vertex and index buffers with the above data.
var vStream = new DataStream(sizeInBytes: vertices.Length * sizeof(Vertex), canRead: false, canWrite: true);
var iStream = new DataStream(sizeInBytes: indices.Length * sizeof(Index), canRead: false, canWrite: true);
{
vStream.WriteRange(vertices);
iStream.WriteRange(indices);
vBuffer = new Buffer(
device, vStream, new BufferDescription(
vertices.Length * sizeof(Vertex),
ResourceUsage.Immutable,
BindFlags.VertexBuffer,
CpuAccessFlags.None,
ResourceOptionFlags.None,
0)) { DebugName = "Vertex Buffer" };
iBuffer = new Buffer(
device, iStream, new BufferDescription(
indices.Length * sizeof(Index),
ResourceUsage.Immutable,
BindFlags.IndexBuffer,
CpuAccessFlags.None,
ResourceOptionFlags.None,
0)) { DebugName = "Index Buffer" };
}
If I replace the above code with the following, however, it works. I have no idea what I'm doing wrong.
vBuffer = Buffer.Create(
device, vertices, new BufferDescription(
vertices.Length * sizeof(Vertex),
ResourceUsage.Immutable,
BindFlags.VertexBuffer,
CpuAccessFlags.None,
ResourceOptionFlags.None,
0));
vBuffer.DebugName = "Vertex Buffer";
iBuffer = Buffer.Create(
device, indices, new BufferDescription(
indices.Length * sizeof(Index),
ResourceUsage.Immutable,
BindFlags.IndexBuffer,
CpuAccessFlags.None,
ResourceOptionFlags.None,
0));
iBuffer.DebugName = "Index Buffer";
You need to reset each stream's position to zero (e.g. vStream.Position = 0; iStream.Position = 0;) before passing it to new Buffer(...). After WriteRange, the position sits at the end of the stream, so the Buffer constructor reads no valid data.
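The same failure mode exists in any stream API: after writing, the cursor sits at the end, so a consumer reading from the current position sees nothing. A quick illustration (Python's io.BytesIO standing in for SharpDX's DataStream):

```python
import io

stream = io.BytesIO()
stream.write(b"\x01\x02\x03\x04")
assert stream.read() == b""        # cursor is at the end: nothing left to read

stream.seek(0)                     # the equivalent of iStream.Position = 0
assert stream.read() == b"\x01\x02\x03\x04"
```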

SlimDX Direct3D 11 Indexing Problems

I'm trying to draw an indexed square using SlimDX and Direct3D11. I've managed to draw a square without indices, but when I swap to my indexed version I just get a blank screen.
My input layout is set to only take position data (I'm essentially extending from the third tutorial on the SlimDX website) and to draw Triangle Lists.
My render loop code is as follows. (I am using the triangle.fx pixel and vertex shader files from the tutorial; they take vertex positions in screen coordinates and paint them yellow. D3D is shorthand for SlimDX.Direct3D11.)
//clear the render target
context.ClearRenderTargetView(renderTarget, new Color4(0.5f, 0.5f, 1.0f));
context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(mesh.VertexBuffer,12, 0));
context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UNorm, 0);
context.DrawIndexed(mesh.indices, 0, 0);
swapChain.Present(0, PresentFlags.None);
"mesh" is a struct that holds a Vertex buffer, Index buffer and vertex count. The data is filled here:
Vertex[] vertexes = new Vertex[4];
vertexes[0].Position = new Vector3(0, 0, 0.5f);
vertexes[1].Position = new Vector3(0, 0.5f, 0.5f);
vertexes[2].Position = new Vector3(0.5f, 0, 0.5f);
vertexes[3].Position = new Vector3(0.5f, 0.5f, 0.5f);
UInt16[] indexes = { 0, 1, 2, 1, 3, 2 };
DataStream vertices = new DataStream(12 * 4, true, true);
foreach (Vertex vertex in vertexes)
{
vertices.Write(vertex.Position);
}
vertices.Position = 0;
DataStream indices = new DataStream(sizeof(int) * 6, true, true);
foreach (UInt16 index in indexes)
{
indices.Write(index);
}
indices.Position = 0;
mesh = new Mesh();
D3D.Buffer vertexBuffer = new D3D.Buffer(device, vertices, 12 * 4, ResourceUsage.Default, BindFlags.VertexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
mesh.VertexBuffer = vertexBuffer;
mesh.IndexBuffer = new D3D.Buffer(device, indices, 2 * 6, ResourceUsage.Default, BindFlags.IndexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
mesh.vertices = vertexes.GetLength(0);
mesh.indices = indexes.Length;
All of this is nearly identical to my unindexed square method (with the addition of index buffers and indices, and the removal of two duplicate vertices that aren't needed with indexing), but while the unindexed method draws a square, the indexed method doesn't.
My current theory is that there is either something wrong with this line:
mesh.IndexBuffer = new D3D.Buffer(device, indices, 2 * 6, ResourceUsage.Default, BindFlags.IndexBuffer, CpuAccessFlags.None, ResourceOptionFlags.None, 0);
Or these lines:
context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UNorm, 0);
context.DrawIndexed(mesh.indices, 0, 0);
Why don't you just use a vertex and index buffer for this simple example?
Like this (DirectX9):
VertexBuffer vb;
IndexBuffer ib;
vertices = new PositionColored[WIDTH * HEIGHT];
//vertex creation
vb = new VertexBuffer(device, HEIGHT * WIDTH * PositionColored.SizeInBytes, Usage.WriteOnly, PositionColored.Format, Pool.Default);
DataStream stream = vb.Lock(0, 0, LockFlags.None);
stream.WriteRange(vertices);
vb.Unlock();
indices = new short[(WIDTH - 1) * (HEIGHT - 1) * 6];
//index creation
ib = new IndexBuffer(device, sizeof(short) * (WIDTH - 1) * (HEIGHT - 1) * 6, Usage.WriteOnly, Pool.Default, true);
stream = ib.Lock(0, 0, LockFlags.None);
stream.WriteRange(indices);
ib.Unlock();
//Drawing
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.DarkSlateBlue, 1.0f, 0);
device.BeginScene();
device.VertexFormat = PositionColored.Format;
device.SetStreamSource(0, vb, 0, PositionColored.SizeInBytes);
device.Indices = ib;
device.SetTransform(TransformState.World, Matrix.Translation(-HEIGHT / 2, -WIDTH / 2, 0) * Matrix.RotationZ(angle));
device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, WIDTH * HEIGHT, 0, indices.Length / 3);
device.EndScene();
device.Present();
I use the mesh in another way (directx9 code again):
private void CreateMesh()
{
meshTerrain = new Mesh(device, (WIDTH - 1) * (HEIGHT - 1) * 2, WIDTH * HEIGHT, MeshFlags.Managed, PositionColored.Format);
DataStream stream = meshTerrain.VertexBuffer.Lock(0, 0, LockFlags.None);
stream.WriteRange(vertices);
meshTerrain.VertexBuffer.Unlock();
stream.Close();
stream = meshTerrain.IndexBuffer.Lock(0, 0, LockFlags.None);
stream.WriteRange(indices);
meshTerrain.IndexBuffer.Unlock();
stream.Close();
meshTerrain.GenerateAdjacency(0.5f);
meshTerrain.OptimizeInPlace(MeshOptimizeFlags.VertexCache);
meshTerrain = meshTerrain.Clone(device, MeshFlags.Dynamic, PositionNormalColored.Format);
meshTerrain.ComputeNormals();
}
//Drawing
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.DarkSlateBlue, 1.0f, 0);
device.BeginScene();
device.VertexFormat = PositionColored.Format;
device.SetTransform(TransformState.World, Matrix.Translation(-HEIGHT / 2, -WIDTH / 2, 0) * Matrix.RotationZ(angle));
int numSubSets = meshTerrain.GetAttributeTable().Length;
for (int i = 0; i < numSubSets; i++)
{
meshTerrain.DrawSubset(i);
}
device.EndScene();
device.Present();

How do I recolor an image? (see images)

How do I achieve this kind of color replacement programmatically?
So this is the function I have used to replace a pixel:
Color.FromArgb(
oldColorInThisPixel.R + (byte)((1 - oldColorInThisPixel.R / 255.0) * colorToReplaceWith.R),
oldColorInThisPixel.G + (byte)((1 - oldColorInThisPixel.G / 255.0) * colorToReplaceWith.G),
oldColorInThisPixel.B + (byte)((1 - oldColorInThisPixel.B / 255.0) * colorToReplaceWith.B)
)
Thank you, CodeInChaos!
The formula for calculating the new pixel is:
newColor.R = OldColor;
newColor.G = OldColor;
newColor.B = 255;
Generalizing to arbitrary colors:
I assume you want to map white to white and black to that color. So the formula is newColor = TargetColor + (White - TargetColor) * Input, with Input normalized to the range [0..1]:
newColor.R = oldColor + (1 - oldColor / 255.0) * TargetColor.R;
newColor.G = oldColor + (1 - oldColor / 255.0) * TargetColor.G;
newColor.B = oldColor + (1 - oldColor / 255.0) * TargetColor.B;
And then just iterate over the pixels of the image(byte array) and write them to a new RGB array. There are many threads on how to copy an image into a byte array and manipulate it.
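The per-channel formula is easy to sanity-check: black maps to the target color, white stays white, and a target channel of 255 saturates regardless of input. A quick check (Python, arithmetic only; `tint_channel` is an illustrative name):

```python
def tint_channel(old: int, target: int) -> int:
    """newColor = oldColor + (1 - oldColor/255) * targetColor, per channel."""
    return round(old + (1 - old / 255) * target)

assert tint_channel(0, 255) == 255    # black -> target (full channel)
assert tint_channel(0, 0) == 0        # black -> target (zero channel)
assert tint_channel(255, 200) == 255  # white stays white regardless of target
assert tint_channel(128, 0) == 128    # zero target channel leaves input alone
```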
Easiest would be to use a ColorMatrix for processing images; you will even be able to preview the desired effect on the fly, which is how many color filters are made in graphics editing applications. Here and here you can find introductions to color effects using ColorMatrix in C#. By using ColorMatrix you can make a colorizing filter like you want, as well as sepia, black/white, invert, range, luminosity, contrast, brightness, levels (by multi-pass), etc.
EDIT: Here is an example (update: fixed the color matrix to shift darker values into blue instead of zeroing everything but blue, and added 0.5f to blue because in the picture above black is changed into 50% blue):
var cm = new ColorMatrix(new float[][]
{
new float[] {1, 0, 0, 0, 0},
new float[] {0, 1, 1, 0, 0},
new float[] {0, 0, 1, 0, 0},
new float[] {0, 0, 0, 1, 0},
new float[] {0, 0, 0.5f, 0, 1}
});
var img = Image.FromFile("C:\\img.png");
var ia = new ImageAttributes();
ia.SetColorMatrix(cm);
var bmp = new Bitmap(img.Width, img.Height);
var gfx = Graphics.FromImage(bmp);
var rect = new Rectangle(0, 0, img.Width, img.Height);
gfx.DrawImage(img, rect, 0, 0, img.Width, img.Height, GraphicsUnit.Pixel, ia);
bmp.Save("C:\\processed.png", ImageFormat.Png);
You'll want to use a ColorMatrix here. The source image is grayscale: all its R, G and B values are equal. Then it is just a matter of mapping black to RGB = (0, 0, 255) for dark blue and white to RGB = (255, 255, 255). The matrix thus can look like this:
1 0 0 0 0 // not changing red
0 1 0 0 0 // not changing green
0 0 0 0 0 // B = 0
0 0 0 1 0 // not changing alpha
0 0 1 0 1 // B = 255
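Applying that matrix by hand confirms the mapping. GDI+ treats a color as the row vector [R, G, B, A, 1] with components in [0..1] and multiplies it by the 5x5 matrix; a small sketch (Python, illustrative only):

```python
def apply_color_matrix(rgba, m):
    """GDI+-style transform: row vector [R, G, B, A, 1] (0..1) times a 5x5 matrix."""
    v = list(rgba) + [1.0]  # append the constant translation component
    return tuple(sum(v[i] * m[i][j] for i in range(5)) for j in range(4))

m = [
    [1, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 1, 0, 1],
]
black = (0.0, 0.0, 0.0, 1.0)
white = (1.0, 1.0, 1.0, 1.0)
assert apply_color_matrix(black, m) == (0.0, 0.0, 1.0, 1.0)  # black -> dark blue
assert apply_color_matrix(white, m) == (1.0, 1.0, 1.0, 1.0)  # white -> white
```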
This sample form reproduces the right side image:
public partial class Form1 : Form {
public Form1() {
InitializeComponent();
}
private Image mImage;
protected override void OnPaint(PaintEventArgs e) {
if (mImage != null) e.Graphics.DrawImage(mImage, Point.Empty);
base.OnPaint(e);
}
private void button1_Click(object sender, EventArgs e) {
using (var srce = Image.FromFile(@"c:\temp\grayscale.png")) {
if (mImage != null) mImage.Dispose();
mImage = new Bitmap(srce.Width, srce.Height);
float[][] coeff = {
new float[] { 1, 0, 0, 0, 0 },
new float[] { 0, 1, 0, 0, 0 },
new float[] { 0, 0, 0, 0, 0 },
new float[] { 0, 0, 0, 1, 0 },
new float[] { 0, 0, 1, 0, 1 }};
ColorMatrix cm = new ColorMatrix(coeff);
var ia = new ImageAttributes();
ia.SetColorMatrix(cm);
using (var gr = Graphics.FromImage(mImage)) {
gr.DrawImage(srce, new Rectangle(0, 0, mImage.Width, mImage.Height),
0, 0, mImage.Width, mImage.Height, GraphicsUnit.Pixel, ia);
}
}
this.Invalidate();
}
}
Depends a lot on what your image format is and what your final format is going to be.
Also depends on what tool you want to use.
You may use:
GDI
GDI+
Image Processing library such as OpenCV
GDI is quite fast but can be quite cumbersome. You need to change the palette.
GDI+ is exposed in .NET and can be slower but easier.
OpenCV is great but adds dependency.
(UPDATE)
This code changes the image to blue-scales instead of grey-scales - image format is 32 bit ARGB:
private static unsafe void ChangeColors(string imageFileName)
{
const int noOfChannels = 4;
Bitmap img = (Bitmap) Image.FromFile(imageFileName);
BitmapData data = img.LockBits(new Rectangle(0,0,img.Width, img.Height), ImageLockMode.ReadWrite, img.PixelFormat);
byte* ptr = (byte*) data.Scan0;
for (int j = 0; j < data.Height; j++)
{
byte* scanPtr = ptr + (j * data.Stride);
for (int i = 0; i < data.Stride; i++, scanPtr++)
{
if (i % noOfChannels == 3)
{
*scanPtr = 255;
continue;
}
if (i % noOfChannels != 0)
{
*scanPtr = 0;
}
}
}
img.UnlockBits(data);
img.Save(Path.Combine( Path.GetDirectoryName(imageFileName), "result.png"), ImageFormat.Png);
}
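Condensed, the loop above walks raw scanline bytes in groups of four (B, G, R, A is the in-memory order for GDI+'s 32-bit ARGB on little-endian machines): keep channel 0 (blue), zero channels 1 and 2 (green and red), and force channel 3 (alpha) to 255. The same logic in Python (illustrative only):

```python
def blue_scale(scanline: bytearray) -> bytearray:
    """Per-byte version of the unsafe loop: keep blue, zero green/red, alpha = 255."""
    out = bytearray(scanline)
    for i in range(len(out)):
        if i % 4 == 3:
            out[i] = 255          # alpha channel
        elif i % 4 != 0:
            out[i] = 0            # green and red channels
    return out

pixel = bytearray([10, 20, 30, 40])       # B, G, R, A
assert blue_scale(pixel) == bytearray([10, 0, 0, 255])
```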
This code project article covers this and more: http://www.codeproject.com/KB/GDI-plus/Image_Processing_Lab.aspx
It uses the AForge.NET library to do a Hue filter on an image for a similar effect:
// create filter
AForge.Imaging.Filters.HSLFiltering filter =
new AForge.Imaging.Filters.HSLFiltering( );
filter.Hue = new IntRange( 340, 20 );
filter.UpdateHue = false;
filter.UpdateLuminance = false;
// apply the filter
System.Drawing.Bitmap newImage = filter.Apply( image );
It also depends on what you want: do you want to keep the original and only adjust the way it is shown? An effect or pixel shader in WPF might do the trick and be very fast.
If any Android devs end up looking at this, this is what I came up with to grayscale and tint an image using CodesInChaos's formula and the Android graphics classes ColorMatrix and ColorMatrixColorFilter.
Thanks for the help!
public static ColorFilter getColorFilter(Context context) {
final int tint = ContextCompat.getColor(context, R.color.tint);
final float R = Color.red(tint);
final float G = Color.green(tint);
final float B = Color.blue(tint);
final float Rs = R / 255;
final float Gs = G / 255;
final float Bs = B / 255;
// resultColor = oldColor + (1 - oldColor/255) * tintColor
final float[] colorTransform = {
1, -Rs, 0, 0, R,
1, -Gs, 0, 0, G,
1, -Bs, 0, 0, B,
0, 0, 0, 0.9f, 0};
final ColorMatrix grayMatrix = new ColorMatrix();
grayMatrix.setSaturation(0f);
grayMatrix.postConcat(new ColorMatrix(colorTransform));
return new ColorMatrixColorFilter(grayMatrix);
}
The ColorFilter can then be applied to an ImageView
imageView.setColorFilter(getColorFilter(imageView.getContext()));
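The matrix rows encode the same formula: after setSaturation(0) the input has R = G = B = old, so a row like [1, -Rs, 0, 0, R] evaluates to old * (1 - R/255) + R, which is algebraically identical to old + (1 - old/255) * R. A quick check of that identity (Python, arithmetic only; function names are illustrative):

```python
def channel_after_matrix(old, tint):
    """Row [1, -Rs, 0, 0, R] applied to a desaturated pixel (R = G = B = old)."""
    rs = tint / 255
    return 1 * old + (-rs) * old + 0 + 0 + tint   # a*R + b*G + c*B + d*A + e

def reference(old, tint):
    """CodesInChaos's formula: old + (1 - old/255) * tint."""
    return old + (1 - old / 255) * tint

for old in (0, 64, 128, 255):
    for tint in (0, 100, 255):
        assert abs(channel_after_matrix(old, tint) - reference(old, tint)) < 1e-9
```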
