I am trying to make an HDR rendering pass in my 3D app.
I understand that in order to get the average light on a scene, you'd need to downsample the render output down to a 1x1 texture. This is what I'm struggling with at the moment.
I have set up a 1x1 render target to which I'm going to draw the previous render output. I first tried drawing the output to that render target using a simple SpriteBatch Draw call with a destination rectangle. It was too much to hope that I'd discovered something nobody else had thought of: the result was not the entire scene downsampled to a 1x1 texture. It appears as if only the top-left pixel was being drawn, no matter how much I played with destination rectangles or the Scale overloads.
Now I'm trying another screen-quad render pass, using a shader technique to sample the scene and render a single pixel into the render target. So to be fair, the title is a bit misleading: what I'm actually trying to do is sample a grid of pixels spread evenly across the surface and average them out. But this is where I'm stumped.
I have come across this tutorial:
http://www.xnainfo.com/content.php?content=28
In the file that can be downloaded, there are several examples of downsampling. The one I like most uses a loop that goes through 16 pixels, averages them out, and returns the result.
Nothing I have done so far has produced viable output. The downsampled texture is rendered in the corner of my screen for debugging purposes.
I have modified the HLSL code to look like this:
pixelShaderStruct vertShader(vertexShaderStruct input)
{
    pixelShaderStruct output;
    output.position = float4(input.pos, 1);
    output.texCoord = input.texCoord + 0.5f;
    return output;
};

float4 PixelShaderFunction(pixelShaderStruct input) : COLOR0
{
    float4 color = 0;
    float2 position = input.texCoord;

    for (int x = 0; x < 4; x++)
    {
        for (int y = 0; y < 4; y++)
        {
            color += tex2D(getscene, position + float2(offsets[x], offsets[y]));
        }
    }

    color /= 16;
    return color;
}
This particular line here is where I believe I'm making the error:
color += tex2D(getscene, position + float2(offsets[x], offsets[y]));
I have never properly understood how the texCoord values, used in tex2D sampling, work.
When making a motion blur effect, I had to pass in such infinitesimal values that I was afraid they'd get rounded off to zero in order to produce a normal-looking effect, while at other times passing in large values like 30 or 50 was necessary to produce effects that occupy maybe one third of the screen.
So anyway, my question is:
Given a screen quad (so, flat surface), how do I increment or modify the texCoord values to have a grid of pixels, evenly spread out, sampled across it?
I have tried using:
color += tex2D(getscene, position + float2(offsets[x] * (1/maxX), offsets[y] * (1/maxY)));
where maxX and maxY are the screen resolution, and:
color += tex2D(getscene, position + float2(offsets[x] * x, offsets[y] * y));
...and other "shots in the dark", and all results have ended up the same: the final pixel appears identical to the one in the exact middle of my screen, as if that were the only one being sampled.
How to solve that?
Also, how do those texture coordinates work? Where is (0,0)? What's the maximum?
Thanks all in advance.
I have solved the problem.
I believed that using a dozen render targets, each halving the resolution of the previous one, would be expensive, but I was wrong.
On a mid-range GPU from 2013, an nVidia GTX 560, the cost of re-rendering to 10 render targets was not noticeable; specifically, performance dropped from about 230 FPS to some 220 FPS.
The solution follows. It assumes you already have your entire scene processed and rendered to a render target, which in my case is "renderOutput".
First, I declare a renderTarget array:
public RenderTarget2D[] HDRsampling;
Next, I calculate how many targets I will need in my Load() method, which is called between the menu update and game update loops (a transition state for loading game assets that aren't required in the menu), and initialize them properly:
int counter = 0;
int downX = Game1.maxX;
int downY = Game1.maxY;

//count how many halving steps it takes to get down to (roughly) 1x1
do
{
    downX /= 2;
    downY /= 2;
    counter++;
} while (downX > 1 && downY > 1);

HDRsampling = new RenderTarget2D[counter];

//allocate the chain of successively halved render targets
downX = Game1.maxX / 2;
downY = Game1.maxY / 2;

for (int i = 0; i < counter; i++)
{
    HDRsampling[i] = new RenderTarget2D(Game1.graphics.GraphicsDevice, downX, downY);
    downX /= 2;
    downY /= 2;
}
And finally, C# rendering code is as follows:
if (settings.HDRpass)
{   //HDR rendering passes
    //Uses hardware bilinear downsampling to obtain a 1x1 texture as the scene average
    Game1.graphics.GraphicsDevice.SetRenderTarget(HDRsampling[0]);
    Game1.graphics.GraphicsDevice.Clear(ClearOptions.Target, Color.Black, 0, 0);
    downsampler.Parameters["maxX"].SetValue(HDRsampling[0].Width);
    downsampler.Parameters["maxY"].SetValue(HDRsampling[0].Height);
    downsampler.Parameters["scene"].SetValue(renderOutput);
    downsampler.CurrentTechnique.Passes[0].Apply();
    quad.Render();

    for (int i = 1; i < HDRsampling.Length; i++)
    {   //Downsample the scene texture repeatedly until the last HDRsampling target, which should be a 1x1 pixel
        Game1.graphics.GraphicsDevice.SetRenderTarget(HDRsampling[i]);
        Game1.graphics.GraphicsDevice.Clear(ClearOptions.Target, Color.Black, 0, 0);
        downsampler.Parameters["maxX"].SetValue(HDRsampling[i].Width);
        downsampler.Parameters["maxY"].SetValue(HDRsampling[i].Height);
        downsampler.Parameters["scene"].SetValue(HDRsampling[i - 1]);
        downsampler.CurrentTechnique.Passes[0].Apply();
        quad.Render();
    }

    //assign the 1x1 pixel
    downsample1x1 = HDRsampling[HDRsampling.Length - 1];

    Game1.graphics.GraphicsDevice.SetRenderTarget(extract);
    //switch out the render target so we can send the 1x1 sample to the shader
    bloom.Parameters["downSample1x1"].SetValue(downsample1x1);
}
This obtains the downSample1x1 texture, which is later used in the final pass of the final shader.
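If you ever want that average on the CPU side as well (for example to drive eye-adaptation logic over time), a 1x1 render target can also be read back directly. This is just a sketch and not part of the pipeline above; note that reading a render target back stalls the GPU, so it should be done sparingly:

//Read the single averaged texel back to the CPU (in XNA 4.0, RenderTarget2D derives from Texture2D)
Color[] average = new Color[1];
downsample1x1.GetData(average);
float averageBrightness = (average[0].R + average[0].G + average[0].B) / (3f * 255f);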
The shader code for actual downsampling is barebones simple:
texture2D scene;
sampler getscene = sampler_state
{
    texture = <scene>;
    MinFilter = linear;
    MagFilter = linear;
    MipFilter = point;
    MaxAnisotropy = 1;
    AddressU = CLAMP;
    AddressV = CLAMP;
};

float maxX, maxY;

struct vertexShaderStruct
{
    float3 pos : POSITION0;
    float2 texCoord : TEXCOORD0;
};

struct pixelShaderStruct
{
    float4 position : POSITION0;
    float2 texCoord : TEXCOORD0;
};

pixelShaderStruct vertShader(vertexShaderStruct input)
{
    pixelShaderStruct output;
    float2 offset = float2(0.5 / maxX, 0.5 / maxY);

    output.position = float4(input.pos, 1);
    output.texCoord = input.texCoord + offset;
    return output;
};

float4 PixelShaderFunction(pixelShaderStruct input) : COLOR0
{
    return tex2D(getscene, input.texCoord);
}

technique Sample
{
    pass P1
    {
        VertexShader = compile vs_2_0 vertShader();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
How you implement your scene-average luminosity is up to you; I'm still experimenting with all that, but I hope this helps somebody out there!
The tutorial you linked is a good demonstration of how to do Tonemapping, but it sounds to me like you haven't performed the repeated downsampling that's required to get to a 1x1 image.
You can't simply take a high-resolution image (1920x1080, say), pick 16 arbitrary pixels, add them up, divide by 16 and call that your luminance.
You need to repeatedly downsample the source image to smaller and smaller textures, usually by half in each dimension at every stage. Each pixel of the resulting downsample is an average of a 2x2 grid of pixels on the previous texture (this is handled by bilinear sampling). Eventually you'll end up with a 1x1 image that is the average colour value for the entire original 1920x1080 image and from that you can calculate the average luminance of the source image.
Without repeated downsampling, your luminance calculation is going to be a very noisy value, since its input is a mere 16 of the ~2M pixels in the original image. To get a correct and smooth luminance, every pixel in the original image needs to contribute.
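For the luminance itself, a common choice (not something the linked tutorial mandates) is a Rec. 709 weighted sum of the averaged colour; in C# that might look like:

//Relative luminance from linear-space RGB values in [0,1], using Rec. 709 weights
static float Luminance(float r, float g, float b)
{
    return 0.2126f * r + 0.7152f * g + 0.0722f * b;
}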
First of all, please understand that my sentences may not be smooth since I'm using a translator.
I want to combine two textures into one texture.
Example: 1. image, 2. image, 3. result
A simple combination of two textures is not a problem.
The problem is that translation, rotation and scaling should be applied to one of the textures before merging, and I can't think of a way to do that. I'd appreciate it if you could help me.
Here is my code:
Texture2D CombineTexutes(Texture2D _textureA, Texture2D _textureB, int startX = 0, int startY = 0)
{
    //Create the new texture
    Texture2D textureResult = new Texture2D(_textureA.width, _textureA.height);
    //Clone texture A into it
    textureResult.SetPixels(_textureA.GetPixels());
    //Now copy texture B into texture A
    for (int x = startX; x < _textureB.width + startX; x++)
    {
        for (int y = startY; y < _textureB.height + startY; y++)
        {
            Color c = _textureB.GetPixel(x - startX, y - startY);
            if (c.a > 0.0f) //Is not transparent
            {
                //Copy the pixel colour into texture A
                textureResult.SetPixel(x, y, c);
            }
        }
    }
    //Apply the changes
    textureResult.Apply();
    return textureResult;
}
This is relatively tricky with pixel transformations. You would have to read up on various algorithms for each of those tasks.
A much easier solution would be to use game objects with sprite renderers and then render everything onto a "render texture" (if you really need a texture).
Game objects with a sprite renderer can be easily rotated, scaled etc.
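A rough sketch of what that could look like (the camera, output size and class name here are illustrative assumptions, not taken from your project): a dedicated orthographic camera that sees only the two sprite objects renders them into a RenderTexture, and the result is read back into a regular Texture2D.

using UnityEngine;

public class SpriteCombiner : MonoBehaviour
{
    public Camera captureCamera; //orthographic camera that sees only the sprite objects
    public int width = 1024;
    public int height = 1024;

    public Texture2D Capture()
    {
        //Render the camera's view (the transformed sprites) into a RenderTexture
        RenderTexture rt = new RenderTexture(width, height, 24);
        captureCamera.targetTexture = rt;
        captureCamera.Render();

        //Read the pixels back into a regular Texture2D
        RenderTexture.active = rt;
        Texture2D result = new Texture2D(width, height, TextureFormat.RGBA32, false);
        result.ReadPixels(new Rect(0, 0, width, height), 0, 0);
        result.Apply();

        //Clean up
        captureCamera.targetTexture = null;
        RenderTexture.active = null;
        rt.Release();

        return result;
    }
}

Because the sprites are ordinary game objects, you can freely set their transforms (translate, rotate, scale) before calling Capture().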
Your result image does look as if the sprite was actually being deformed though, not just scaled in one direction.
If that's your goal, then I'd recommend looking at the Animation 2D package. That allows you to do near anything with sprites.
I'm having a little trouble with a Texture2D I'm trying to draw in XNA. Basically, I have a power-up "cooldown fill effect" going on. I have a texture that I draw partially as the cooldown decreases. So, for example, when the cooldown is 10% done I draw only the bottom 10% of the texture, at 20% done only the bottom 20%, and so on.
The problem I'm having is, when drawing the texture, it keeps wobbling as it fills up.
Note that below, ActiveSkillTexture is my preloaded fill texture. Its size is the size of the fully filled graphic.
InterfaceDrawer.Draw is a method that calls SpriteBatch.Draw, but does some extra stuff beforehand. For all intents and purposes, it's the same as SpriteBatch.Draw.
Scale is my scale factor, it's just a float between 0 and 1.
MyDest is a pre-calculated position for where this texture should draw (from the top-left, as usual).
Here's a snippet of code:
Rectangle NewBounds = ActiveSkillTexture.Bounds;
float cooldown = GetCooldown(ActiveSkillId);

if (cooldown > 0) //cooldown timer
{
    //Code that calculated cooldown percent which I'm leaving out

    if (percentdone != 1) //the percentage the cooldown is done
    {
        //code for fill-from bottom --
        float SubHeight = ActiveSkillTexture.Height * percentremaining;
        float NewHeight = ActiveSkillTexture.Height * percentdone;
        NewBounds.Y += (int)SubHeight;
        NewBounds.Height = (int)NewHeight;
        MyDest.Y += SubHeight * Scale;
    }
}

if (ActiveSkillTexture != null)
    InterfaceDrawer.Draw(SpriteBatch, ActiveSkillTexture, MyDest, NewBounds, Color, 0.0f, Vector2.Zero, Scale, SpriteEffects.None, 0.0f);
I know you can't see it, but it's basically wobbling up and down as it fills. I tried printing out the values for the destination, the NewBounds rectangle, etc., and they all seemed to increase consistently rather than "sway", so I'm not sure what's going on. Interestingly enough, if I fill it from the top, it doesn't happen. But that's probably because I don't have to do math to alter the destination position each time I draw it (because it should draw from the top-left corner each time).
Any help would be greatly appreciated. Thanks!
I think this would be easier if you set the origin parameter of your spriteBatch.Draw call to the bottom-left of your texture.
In this way you simply increase your sourceRectangle.Height, with something like this:
sourceRectangle.Height = (int)(ActiveSkillTexture.Height * percentdone);
without doing that useless math to find the destination position.
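For illustration, here is a minimal sketch of that idea, assuming the fill reveals the bottom of the texture (fillTexture, bottomLeftOnScreen, percentDone and scale are placeholder names, not from the original code):

//Reveal the bottom percentDone fraction of the texture, growing upward from a fixed anchor
int revealedHeight = (int)(fillTexture.Height * percentDone);
Rectangle sourceRect = new Rectangle(0, fillTexture.Height - revealedHeight, fillTexture.Width, revealedHeight);

//The origin is given in source-rectangle space, so (0, Height) is its bottom-left corner
Vector2 origin = new Vector2(0, sourceRect.Height);

//bottomLeftOnScreen is the fixed on-screen position of the bar's bottom-left corner,
//so no per-frame destination math is needed and nothing can wobble
spriteBatch.Draw(fillTexture, bottomLeftOnScreen, sourceRect, Color.White, 0f, origin, scale, SpriteEffects.None, 0f);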
So I'm making an Asteroids-like game for practice in XNA. I have a large texture I'm using as a texture atlas, as described in R.B. Whitaker's wiki here: http://rbwhitaker.wikidot.com/texture-atlases-1
I've now branched off from the wiki and am attempting to add collision detection for my ship and the alien. The problem is I need the current sprite from the atlas as a separate Texture2D so I can do accurate collision detection. I've read examples using the Texture2D.GetData method, but I can't seem to get a working implementation. Any detailed implementations of that method, or other options, would be greatly appreciated.
Use Texture2D.GetData<Color>() on your atlas texture to get the array of colors representing individual pixels:
Color[] imageData = new Color[image.Width * image.Height];
image.GetData<Color>(imageData);
When you need a rectangle of data from the texture, use a method as such:
Color[] GetImageData(Color[] colorData, int width, Rectangle rectangle)
{
    Color[] color = new Color[rectangle.Width * rectangle.Height];
    for (int x = 0; x < rectangle.Width; x++)
        for (int y = 0; y < rectangle.Height; y++)
            color[x + y * rectangle.Width] = colorData[x + rectangle.X + (y + rectangle.Y) * width];
    return color;
}
The method above throws an index-out-of-range exception if the rectangle is out of bounds of the original texture. The width parameter is the width of the texture.
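If you want to guard against that, one option (just a sketch, not part of the original answer) is to clip the requested rectangle against the texture bounds before copying; since width is the texture width, colorData.Length / width recovers the height:

//Hypothetical guard: clamp the rectangle to the full texture area first
Rectangle textureBounds = new Rectangle(0, 0, width, colorData.Length / width);
rectangle = Rectangle.Intersect(rectangle, textureBounds);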
Usage with sourceRectangle of a specific sprite:
Color[] imagePiece = GetImageData(imageData, image.Width, sourceRectangle);
If you really need another texture (though I imagine you actually need color data for per pixel collision), you could do this:
Texture2D subtexture = new Texture2D(GraphicsDevice, sourceRectangle.Width, sourceRectangle.Height);
subtexture.SetData<Color>(imagePiece);
Side note: if you need to use color data from textures constantly for collision tests, cache it as an array of colors, not as textures. Getting data back from a texture forces the CPU to wait on the GPU, which degrades performance.
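To show how those colour arrays are typically consumed, here is a rough per-pixel collision check in the spirit of the standard XNA samples, assuming axis-aligned, unscaled sprites (the rectangles are the sprites' screen bounds, the arrays their colour data from GetImageData above):

static bool IntersectPixels(Rectangle rectangleA, Color[] dataA, Rectangle rectangleB, Color[] dataB)
{
    //Find the overlapping region of the two bounding rectangles
    int top = Math.Max(rectangleA.Top, rectangleB.Top);
    int bottom = Math.Min(rectangleA.Bottom, rectangleB.Bottom);
    int left = Math.Max(rectangleA.Left, rectangleB.Left);
    int right = Math.Min(rectangleA.Right, rectangleB.Right);

    for (int y = top; y < bottom; y++)
    {
        for (int x = left; x < right; x++)
        {
            //Look up the pixel of each sprite at this screen position
            Color colorA = dataA[(x - rectangleA.Left) + (y - rectangleA.Top) * rectangleA.Width];
            Color colorB = dataB[(x - rectangleB.Left) + (y - rectangleB.Top) * rectangleB.Width];

            //A collision occurs when both pixels are non-transparent
            if (colorA.A != 0 && colorB.A != 0)
                return true;
        }
    }
    return false;
}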
What I am looking to do sounds really simple, but nowhere on the Internet have I found a way to do it in .NET, nor a 3rd-party component that does it (without spending thousands on completely unnecessary features).
Here goes:
I have a jpeg of a floor tile (actual photo) that I create a checkerboard pattern with.
In dotnet, it is easy to rotate and stitch photos together and save the final image as a jpeg.
Next, I want to take that final picture and make it appear as if the "tiles" are laying on a floor for a generic "room scene". Basically adding a 3D perspective to make it appear as if it is actually in the room scene.
Here's a website that does something similar with carpeting; however, I need to do this in a WinForms application:
Flor Website
Basically, I need to create a 3D perspective of a jpeg, then save it as a new jpeg (then I can put an overlay of the generic room scene).
Anyone have any idea on where to get a 3rd party DotNet image processing module that can do this seemingly simple task?
It is not so simple, because you need a 3D transformation, which is more complicated and computationally expensive than a simple 2D transformation such as rotation, scaling or shearing. To give you an idea of the difference in the math: 2D transformations require 2x2 matrices, whereas a projection transformation (which is more complicated than other 3D transforms) requires a 4x4 matrix...
What you need is some 3D rendering engine in which you can draw polygons (in a perspective view) and then cover them with a texture (like a carpet). For .Net 2.0, I'd recommend using SlimDX, a managed wrapper around DirectX that would allow you to render polygons, but there is some learning curve. If you are using WPF (.Net 3.0 and up), there is a built-in 3D canvas that allows you to draw textured polygons in perspective. That might be easier/better to learn than SlimDX for your purposes. I'm sure that there is a way to redirect the output of the 3D canvas towards a jpeg...
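If you go the WPF route, redirecting the 3D canvas to a JPEG could look roughly like this (a sketch; viewport3D stands for your Viewport3D element and the sizes and file name are placeholders):

//Render the Viewport3D into a bitmap and encode it as a JPEG file
var bitmap = new RenderTargetBitmap(800, 600, 96, 96, PixelFormats.Pbgra32);
bitmap.Render(viewport3D);

var encoder = new JpegBitmapEncoder();
encoder.Frames.Add(BitmapFrame.Create(bitmap));
using (var stream = System.IO.File.Create("floor.jpg"))
    encoder.Save(stream);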
You might simplify the problem a lot if you don't require great performance and if you restrict the orientation of the texture (eg. always a horizontal floor or always a vertical wall). If so, you could probably render it yourself with a simple drawing loop in .Net 2.0.
If you just want a plain floor, your code would look like the example below. WARNING: obtaining your desired results will take some significant time and refinement, especially if you don't know the math very well. But on the other hand, it is always fun to play with code of this type... (:
Find some sample images below.
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Windows.Forms;

namespace floorDrawer
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();

            ResizeRedraw = DoubleBuffered = true;
            Width = 800;
            Height = 600;
            Paint += new PaintEventHandler(Form1_Paint);
        }

        void Form1_Paint(object sender, PaintEventArgs e)
        {
            // a few parameters that control the projection transform
            // these are the parameters that you can modify to change
            // the output
            double cz = 10;   // distortion
            double m = 1000;  // magnification, usually around 1000 (the pixel width of the monitor)
            double y0 = -100; // floor height

            string texturePath = @"c:\pj\Hydrangeas.jpg"; //@"c:\pj\Chrysanthemum.jpg";

            // screen size
            int height = ClientSize.Height;
            int width = ClientSize.Width;

            // center of screen
            double cx = width / 2;
            double cy = height / 2;

            // render destination
            var dst = new Bitmap(width, height);

            // source texture
            var src = Bitmap.FromFile(texturePath) as Bitmap;

            // texture dimensions
            int tw = src.Width;
            int th = src.Height;

            for (int y = 0; y < height; y++)
                for (int x = 0; x < width; x++)
                {
                    double v = m * y0 / (y - cy) - cz;
                    double u = (x - cx) * (v + cz) / m;
                    int uu = ((int)u % tw + tw) % tw;
                    int vv = ((int)v % th + th) % th;

                    // The following .SetPixel() and .GetPixel() are painfully slow.
                    // You can replace this whole loop with an equivalent implementation
                    // using pointers inside unsafe{} code to make it much faster.
                    // Note that by casting u and v into integers, we are performing
                    // a nearest pixel interpolation... It's sloppy but effective.
                    dst.SetPixel(x, y, src.GetPixel(uu, vv));
                }

            // draw result on the form
            e.Graphics.DrawImage(dst, 0, 0);
        }
    }
}
In C#.net I have a mesh cylinder with a dynamic diameter and length and am trying to map a texture to it. I have spent the better part of a day trying to find out how to do so but have had no success finding any information on Google.
The cylinder's top uses one area of the jpg, and the side uses the rest of the jpg.
I need to position the jpg's image edge along the top edge of the cylinder, e.g. red on top and green on the side, using one image.
Can anyone help me to map the VertexBuffer points to the texture?
C#.Net 2008
DirectX 9 (unmanaged)
I have posted my working solution below.
Although this tutorial is in VB it clearly explains the process.
Calculating the texture coordinates can be quite some work; that is why this is normally done in 3D modeling software, so you can easily and, more importantly, visually adjust the mapping.
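For reference, the basic idea for a cylinder is to wrap the angle around the axis into U and the position along the axis into V. A rough sketch of that math, assuming the cylinder is centred on the origin with its axis along Z (this is not the code from the solution posted below, which ends up using a spherical-style mapping):

//U: angle around the axis, remapped from -PI..PI to 0..1
float ComputeU(float x, float y)
{
    double angle = Math.Atan2(y, x);
    return (float)((angle + Math.PI) / (2.0 * Math.PI));
}

//V: position along the axis, 0 at the top and 1 at the bottom for a cylinder of the given length
float ComputeV(float z, float cylinderLength)
{
    return 0.5f - z / cylinderLength;
}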
Let me know if you have any questions.
EDIT
For adding texture coordinates to the DirectX-generated cylinder, see this.
Ok, I've finally figured it out. I previously had some code that was working but wasn't exactly what I wanted, taken from
http://channel9.msdn.com/coding4fun/articles/Ask-the-ZMan-Applying-Textures-Part-3
Anyway, I just made some modifications to it.
For reference and for those arriving from Google, here you go.
public static float ComputeBoundingSphere(Mesh mesh, out Microsoft.DirectX.Vector3 center)
{
    // Lock the vertex buffer
    Microsoft.DirectX.GraphicsStream data = null;
    try
    {
        data = mesh.LockVertexBuffer(LockFlags.ReadOnly);

        // Now compute the bounding sphere
        return Geometry.ComputeBoundingSphere(data, mesh.NumberVertices,
            mesh.VertexFormat, out center);
    }
    finally
    {
        // Make sure to unlock the vertex buffer
        if (data != null)
            mesh.UnlockVertexBuffer();
    }
}
private static Mesh SetSphericalTexture(Mesh mesh)
{
    Microsoft.DirectX.Vector3 vertexRay;
    Microsoft.DirectX.Vector3 meshCenter;
    double phi;
    float u;

    Microsoft.DirectX.Vector3 north = new Microsoft.DirectX.Vector3(0f, 0f, 1f);
    Microsoft.DirectX.Vector3 equator = new Microsoft.DirectX.Vector3(0f, 1f, 0f);
    Microsoft.DirectX.Vector3 northEquatorCross = Microsoft.DirectX.Vector3.Cross(north, equator);

    ComputeBoundingSphere(mesh, out meshCenter);

    using (VertexBuffer vb = mesh.VertexBuffer)
    {
        CustomVertex.PositionNormalTextured[] verts = (CustomVertex.PositionNormalTextured[])vb.Lock(0, typeof(CustomVertex.PositionNormalTextured), LockFlags.None, mesh.NumberVertices);
        try
        {
            for (int i = 0; i < verts.Length; i++)
            {
                //For each vertex take a ray from the centre of the mesh to the vertex and normalize it so the dot products work.
                vertexRay = Microsoft.DirectX.Vector3.Normalize(verts[i].Position - meshCenter);
                phi = Math.Acos((double)vertexRay.Z);

                if (vertexRay.Z > -0.9)
                {
                    verts[i].Tv = 0.121f; //percentage of the image being the top side
                }
                else
                    verts[i].Tv = (float)(phi / Math.PI);

                if (vertexRay.Z == 1.0f || vertexRay.Z == -1.0f)
                {
                    verts[i].Tu = 0.5f;
                }
                else
                {
                    u = (float)(Math.Acos(Math.Max(Math.Min((double)vertexRay.Y / Math.Sin(phi), 1.0), -1.0)) / (2.0 * Math.PI));

                    //Since the cross product is just giving us (1,0,0), i.e. the x axis,
                    //and the dot product was giving us a +ve or -ve angle, we can just compare the x value with 0
                    verts[i].Tu = (vertexRay.X > 0f) ? u : 1 - u;
                }
            }
        }
        finally
        {
            vb.Unlock();
        }
    }
    return mesh;
}