So I'm making an Asteroids-like game for practice in XNA. I have a large texture I'm using as a texture atlas, as described in R.B. Whitaker's wiki here: http://rbwhitaker.wikidot.com/texture-atlases-1
I've now branched off from the wiki and am attempting to add collision detection between my ship and the alien. The problem is that I need the current sprite from the atlas as a separate Texture2D so that I can do accurate collision detection. I've read examples using the Texture2D.GetData method, but I can't seem to get a working implementation. Any detailed implementations of that method, or other options, would be greatly appreciated.
Use Texture2D.GetData<Color>() on your atlas texture to get an array of Color values representing the individual pixels:
Color[] imageData = new Color[image.Width * image.Height];
image.GetData<Color>(imageData);
When you need a rectangle of data from the texture, use a method like this:
Color[] GetImageData(Color[] colorData, int width, Rectangle rectangle)
{
    Color[] color = new Color[rectangle.Width * rectangle.Height];
    for (int x = 0; x < rectangle.Width; x++)
        for (int y = 0; y < rectangle.Height; y++)
            // Map (x, y) inside the rectangle back into the atlas data, which is 'width' pixels wide
            color[x + y * rectangle.Width] = colorData[x + rectangle.X + (y + rectangle.Y) * width];
    return color;
}
The method above throws an IndexOutOfRangeException if the rectangle falls outside the bounds of the original texture. The width parameter is the width of that texture.
Usage with the sourceRectangle of a specific sprite:
Color[] imagePiece = GetImageData(imageData, image.Width, sourceRectangle);
If you really need another texture (though I imagine you actually need the color data for per-pixel collision), you could do this:
Texture2D subtexture = new Texture2D(GraphicsDevice, sourceRectangle.Width, sourceRectangle.Height);
subtexture.SetData<Color>(imagePiece);
Side note - if you need color data from textures constantly for collision tests, cache it as arrays of Color, not as textures. Getting data back from a texture makes the CPU wait on the GPU, which degrades performance.
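For illustration, here is a minimal sketch of such a per-pixel test over two cached color arrays, in the spirit of the standard XNA per-pixel collision sample. It assumes unrotated sprites whose on-screen bounds are rectangleA and rectangleB; all names here are illustrative, not from your code:
static bool IntersectPixels(Rectangle rectangleA, Color[] dataA,
                            Rectangle rectangleB, Color[] dataB)
{
    // Intersection of the two bounding rectangles
    int top = Math.Max(rectangleA.Top, rectangleB.Top);
    int bottom = Math.Min(rectangleA.Bottom, rectangleB.Bottom);
    int left = Math.Max(rectangleA.Left, rectangleB.Left);
    int right = Math.Min(rectangleA.Right, rectangleB.Right);

    // Check every pixel in the overlapping region
    for (int y = top; y < bottom; y++)
    {
        for (int x = left; x < right; x++)
        {
            Color colorA = dataA[(x - rectangleA.Left) + (y - rectangleA.Top) * rectangleA.Width];
            Color colorB = dataB[(x - rectangleB.Left) + (y - rectangleB.Top) * rectangleB.Width];

            // A hit only counts if both pixels are non-transparent
            if (colorA.A != 0 && colorB.A != 0)
                return true;
        }
    }
    return false;
}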
First of all, please understand that I'm writing through a translator, so the sentences may not be smooth.
I want to combine two textures into a single texture.
Example: 1. [texture A] 2. [texture B] 3. [combined result]
Simply combining the two textures is not a problem.
The problem is that translate, rotate, and scale need to be applied to one of the textures before merging, and I can't think of a way to do it. I'd appreciate it if you could help me.
Here is my code:
Texture2D CombineTextures(Texture2D _textureA, Texture2D _textureB, int startX = 0, int startY = 0)
{
    // Create the new texture
    Texture2D textureResult = new Texture2D(_textureA.width, _textureA.height);
    // Clone texture A into it
    textureResult.SetPixels(_textureA.GetPixels());
    // Now copy texture B over texture A
    for (int x = startX; x < _textureB.width + startX; x++)
    {
        for (int y = startY; y < _textureB.height + startY; y++)
        {
            Color c = _textureB.GetPixel(x - startX, y - startY);
            if (c.a > 0.0f) // Not transparent
            {
                // Copy the pixel color into the clone of texture A
                textureResult.SetPixel(x, y, c);
            }
        }
    }
    // Apply the changes
    textureResult.Apply();
    return textureResult;
}
This is relatively tricky to do with per-pixel transformations; you would have to read up on the algorithms for each of those tasks.
A much easier solution would be to use game objects with sprite renderers and then render everything onto a RenderTexture (if you really need a texture). Game objects with a sprite renderer can easily be rotated, scaled, and so on; a sketch follows.
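As a rough sketch of that approach (assuming a dedicated captureCamera that is set up to see only the arranged sprite objects; all names here are illustrative):
Texture2D Capture(Camera captureCamera, int width, int height)
{
    // Render the camera's view into an offscreen RenderTexture
    RenderTexture rt = new RenderTexture(width, height, 24);
    captureCamera.targetTexture = rt;
    captureCamera.Render();

    // Copy the result back into a readable Texture2D
    RenderTexture previous = RenderTexture.active;
    RenderTexture.active = rt;
    Texture2D result = new Texture2D(width, height, TextureFormat.RGBA32, false);
    result.ReadPixels(new Rect(0, 0, width, height), 0, 0);
    result.Apply();

    // Restore state and clean up
    RenderTexture.active = previous;
    captureCamera.targetTexture = null;
    rt.Release();
    return result;
}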
Your result image does look as if the sprite was actually deformed, though, not just scaled along a direction.
If that's your goal, then I'd recommend looking at the 2D Animation package; it lets you do nearly anything with sprites.
I need to read an image's pixel colors; the image will be black and white only. If a pixel is white, I want to instantiate a white cube, and if it is black, a black cube. This is all new to me, so I did some digging and ended up using System.Drawing and bitmaps. However, now I'm stuck: I don't know how to check for a white pixel.
For example
private void PixelReader()
{
    Bitmap img = new Bitmap("ImageName.png");
    for (int i = 0; i < img.Width; i++)
    {
        for (int j = 0; j < img.Height; j++)
        {
            System.Drawing.Color pixel = img.GetPixel(i, j);
            if (pixel == /* ...the white color? This is the part I'm missing */)
            {
                // instantiate white cube.
            }
        }
    }
}
Is there any other way of doing this? Thanks!
If the image is truly black and white only (that is, all pixels equal either System.Drawing.Color.Black or System.Drawing.Color.White), then you can compare against those colors. One caveat: the == operator on System.Drawing.Color compares the color's name and state as well as its ARGB values, and GetPixel returns an unnamed color, so compare the ARGB values. Within the code you posted, it looks like this:
if (pixel.ToArgb() == System.Drawing.Color.White.ToArgb())
{
    // instantiate white cube.
}
If the image is part of your Unity assets, a better approach is to read it using Resources. Place the image in the Assets/Resources folder; then you can use the following code (note that Resources.Load takes the asset path without the file extension):
Texture2D image = (Texture2D)Resources.Load("ImageName");
If the image is entirely black or entirely white, there's no need to loop - just check one pixel:
if (image.GetPixel(0, 0) == Color.white)
{
    // Instantiate white cube
}
else
{
    // Instantiate black cube
}
You can load the image as a resource into a Texture2D, then use UnityEngine.Color's grayscale property to check whether the color you read is sufficiently close to white.
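A minimal sketch of that check (Color.grayscale is Unity's built-in luminance value; the 0.5f cutoff is an arbitrary assumption):
Color pixel = image.GetPixel(x, y);
if (pixel.grayscale > 0.5f)
{
    // close to white -> instantiate white cube
}
else
{
    // close to black -> instantiate black cube
}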
It sounds like you are going a bit overboard with it; you could instead use features already built into Unity. Try taking a look at getting the pixel color during a raycast (the snippet below is UnityScript):
if (Physics.Raycast (ray, hit)) {
    var TextureMap: Texture2D = hit.transform.renderer.material.mainTexture;
    var pixelUV = hit.textureCoord;
    pixelUV.x *= TextureMap.width;
    pixelUV.y *= TextureMap.height;
    print ("x=" + pixelUV.x + ",y=" + pixelUV.y + " " + TextureMap.GetPixel (pixelUV.x, pixelUV.y));
}
Taken from here
So, in order to get the Color[] data from a texture after it has been rotated (to use for per-pixel collisions), I draw said texture, rotated, to a separate RenderTarget2D, then read the render target back as a Texture2D and get the color data from it:
public Color[] GetColorDataOf(SpriteBatch spriteBatch, Texture2D texture, float rotation)
{
    // Get boundingBox of texture after rotation
    Rectangle boundingBox = GetEnclosingBoundingBox(texture, rotation);
    // Create new rendertarget of this size
    RenderTarget2D buffer = new RenderTarget2D(GraphicsDevice, boundingBox.Width, boundingBox.Height);
    // Change spritebatch to new rendertarget
    GraphicsDevice.SetRenderTarget(buffer);
    // Clear new rendertarget
    GraphicsDevice.Clear(Color.Transparent);
    // Draw sprite to new rendertarget
    spriteBatch.Draw(texture, new Rectangle(boundingBox.Width / 2, boundingBox.Height / 2, texture.Width, texture.Height), null, Color.White, rotation, new Vector2(boundingBox.Center.X, boundingBox.Center.Y), SpriteEffects.None, 1f);
    // Change rendertarget back to backbuffer
    GraphicsDevice.SetRenderTarget(null);
    // Get color data from the rendertarget
    Color[] colorData = new Color[boundingBox.Width * boundingBox.Height];
    Texture2D bufferTexture = (Texture2D)buffer;
    bufferTexture.GetData(colorData);
    return colorData;
}
Now I'm having two issues with this (I expect they are linked): first, the texture gets drawn on screen; second, all the Color[] data returned is empty (i.e. all fields equal 0).
Edit: Using Texture2D.SaveAsPng(), I can see that bufferTexture is the correct size but completely transparent, indicating that the issue lies in drawing to the buffer.
So I fixed it. It turns out I needed another pair of SpriteBatch.Begin() and SpriteBatch.End() calls around the draw to the new render target; otherwise the batch was being flushed to the backbuffer instead.
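In outline, the corrected middle of the method looks like this (same names as above; the Draw call itself is unchanged):
GraphicsDevice.SetRenderTarget(buffer);
GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin();
// ... the spriteBatch.Draw call from above ...
spriteBatch.End(); // End() flushes the batch into the currently bound render target
GraphicsDevice.SetRenderTarget(null);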
I am trying to make an HDR rendering pass in my 3D app.
I understand that in order to get the average light of a scene, you need to downsample the render output to a 1x1 texture. This is where I'm struggling at the moment.
I have set up a 1x1 render target to which I'm going to draw the previous render output. I tried drawing the output to that render target with a simple SpriteBatch Draw call and a target rectangle, but (it was too much to hope I'd discovered something nobody else had thought of) the result was not the entire scene downsampled to 1x1: it appears that only the top-left pixel was being drawn, no matter how much I played with target rectangles or the Scale overloads.
Now I'm trying another screen-quad render pass, using a shader technique to sample the scene and render a single pixel into the render target. So, to be fair, the title is a bit misleading: what I'm trying to do is sample a grid of pixels spread evenly across the surface and average them out. But this is where I'm stumped.
I have come across this tutorial:
http://www.xnainfo.com/content.php?content=28
In the file that can be downloaded there are several examples of downsampling; the one I like most uses a loop that goes through 16 pixels, averages them, and returns the result.
Nothing I've done so far has produced viable output. (The downsampled texture is rendered in the corner of my screen for debugging purposes.)
I have modified the HLSL code to look like this:
pixelShaderStruct vertShader(vertexShaderStruct input)
{
    pixelShaderStruct output;
    output.position = float4(input.pos, 1);
    output.texCoord = input.texCoord + 0.5f;
    return output;
};

float4 PixelShaderFunction(pixelShaderStruct input) : COLOR0
{
    float4 color = 0;
    float2 position = input.texCoord;
    for (int x = 0; x < 4; x++)
    {
        for (int y = 0; y < 4; y++)
        {
            color += tex2D(getscene, position + float2(offsets[x], offsets[y]));
        }
    }
    color /= 16;
    return color;
}
This particular line is where I believe I'm making the error:
color += tex2D(getscene, position + float2(offsets[x], offsets[y]));
I have never properly understood how the texCoord values used in tex2D sampling work. When making a motion blur effect, I had to pass in values so infinitesimal I was afraid they'd be rounded off to zero in order to produce a normal-looking effect, while at other times passing in large values like 30 and 50 was necessary to produce effects that occupy maybe a third of the screen.
So anyway, my question is:
Given a screen quad (so, a flat surface), how do I increment or modify the texCoord values to sample an evenly spaced grid of pixels across it?
I have tried using:
color += tex2D(getscene, position + float2(offsets[x] * (1/maxX), offsets[y] * (1/maxY)));
where maxX and maxY are the screen resolution, and
color += tex2D(getscene, position + float2(offsets[x] * x, offsets[y] * y));
...and other "shots in the dark", and all the results have ended up the same: the final pixel appears identical to the one in the exact middle of my screen, as if that's the only one being sampled.
How do I solve that?
Also, how do those texture coordinates work? Where is (0,0)? What's the maximum?
Thanks all in advance.
I have solved the problem.
I believed that using a dozen render targets, each halving the resolution of the previous step, would be expensive, but I was wrong. On a mid-range GPU from 2013, an nVidia GTX 560, the cost of re-rendering to 10 render targets was barely noticeable; in specific numbers, performance dropped from 230 FPS to some 220 FPS.
The solution follows. It assumes you already have your entire scene processed and rendered to a render target, which in my case is renderOutput.
First, I declare a renderTarget array:
public RenderTarget2D[] HDRsampling;
Next, I calculate how many targets I will need and initialize them properly in my Load() method, which is called in the transition state between the menu and game update loops (the game assets aren't required in the menu):
int counter = 0;
int downX = Game1.maxX;
int downY = Game1.maxY;
do
{
    downX /= 2;
    downY /= 2;
    counter++;
} while (downX > 1 && downY > 1);

HDRsampling = new RenderTarget2D[counter];
downX = Game1.maxX / 2;
downY = Game1.maxY / 2;
for (int i = 0; i < counter; i++)
{
    HDRsampling[i] = new RenderTarget2D(Game1.graphics.GraphicsDevice, downX, downY);
    downX /= 2;
    downY /= 2;
}
And finally, the C# rendering code is as follows:
if (settings.HDRpass)
{
    // HDR rendering passes
    // Uses hardware bilinear downsampling to obtain a 1x1 texture as the scene average
    Game1.graphics.GraphicsDevice.SetRenderTarget(HDRsampling[0]);
    Game1.graphics.GraphicsDevice.Clear(ClearOptions.Target, Color.Black, 0, 0);
    downsampler.Parameters["maxX"].SetValue(HDRsampling[0].Width);
    downsampler.Parameters["maxY"].SetValue(HDRsampling[0].Height);
    downsampler.Parameters["scene"].SetValue(renderOutput);
    downsampler.CurrentTechnique.Passes[0].Apply();
    quad.Render();
    for (int i = 1; i < HDRsampling.Length; i++)
    {
        // Downsample the scene texture repeatedly until the last HDRsampling target, which should be 1x1 pixel
        Game1.graphics.GraphicsDevice.SetRenderTarget(HDRsampling[i]);
        Game1.graphics.GraphicsDevice.Clear(ClearOptions.Target, Color.Black, 0, 0);
        downsampler.Parameters["maxX"].SetValue(HDRsampling[i].Width);
        downsampler.Parameters["maxY"].SetValue(HDRsampling[i].Height);
        downsampler.Parameters["scene"].SetValue(HDRsampling[i - 1]);
        downsampler.CurrentTechnique.Passes[0].Apply();
        quad.Render();
    }
    // Assign the 1x1 pixel
    downsample1x1 = HDRsampling[HDRsampling.Length - 1];
    // Switch out the rendertarget so we can send the 1x1 sample to the shader
    Game1.graphics.GraphicsDevice.SetRenderTarget(extract);
    bloom.Parameters["downSample1x1"].SetValue(downsample1x1);
}
This obtains the downSample1x1 texture, which is later used in the final pass of the final shader.
The shader code for the actual downsampling is bare-bones simple:
texture2D scene;
sampler getscene = sampler_state
{
    texture = <scene>;
    MinFilter = linear;
    MagFilter = linear;
    MipFilter = point;
    MaxAnisotropy = 1;
    AddressU = CLAMP;
    AddressV = CLAMP;
};

float maxX, maxY;

struct vertexShaderStruct
{
    float3 pos : POSITION0;
    float2 texCoord : TEXCOORD0;
};

struct pixelShaderStruct
{
    float4 position : POSITION0;
    float2 texCoord : TEXCOORD0;
};

pixelShaderStruct vertShader(vertexShaderStruct input)
{
    pixelShaderStruct output;
    float2 offset = float2(0.5 / maxX, 0.5 / maxY);
    output.position = float4(input.pos, 1);
    output.texCoord = input.texCoord + offset;
    return output;
};

float4 PixelShaderFunction(pixelShaderStruct input) : COLOR0
{
    return tex2D(getscene, input.texCoord);
}

technique Sample
{
    pass P1
    {
        VertexShader = compile vs_2_0 vertShader();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
How you implement your scene's average luminosity is up to you; I'm still experimenting with all that, but I hope this helps somebody out there!
The tutorial you linked is a good demonstration of how to do tone mapping, but it sounds to me like you haven't performed the repeated downsampling that's required to get to a 1x1 image.
You can't simply take a high-resolution image (1920x1080, say), pick 16 arbitrary pixels, add them up, divide by 16, and call that your luminance.
You need to repeatedly downsample the source image to smaller and smaller textures, usually by half in each dimension at every stage. Each pixel of the resulting downsample is an average of a 2x2 grid of pixels on the previous texture (this is handled by bilinear sampling). Eventually you'll end up with a 1x1 image that is the average colour value for the entire original 1920x1080 image and from that you can calculate the average luminance of the source image.
Without repeated downsampling, your luminance calculation is going to be a very noisy value, since its input is a mere 16 of the ~2M pixels in the original image. To get a correct, smooth luminance, every pixel in the original image needs to contribute.
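To put a number on it: a 1920x1080 source halved in each dimension per pass takes roughly ceil(log2(1920)) = 11 passes to reach 1x1, which is why the answer above builds a whole chain of render targets rather than a single one.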
Question was answered. For more information, check out EDIT #4 at the end of this text.
We are currently working on a game-making engine, which is going pretty well. I am working on the animation creator and was wondering whether it is possible to draw an image with additive blending.
Let me explain.
We use C#'s System.Drawing library and work with Windows Forms. For now, the user can create an animation by importing a framed animation image (an image containing every frame of the animation) and dragging and dropping these frames wherever he wants.
The actual problem is that we can't figure out how to draw a frame with additive blending.
Here's an example of what additive blending is, if you don't quite get it. (I won't blame you; I have a hard time writing in English.)
We are using the following method to draw on a Panel or directly on the form. Since the AnimationManager code is a mess, it'll be clearer with this example: here's the code that draws a tiled map for the map editor.
// Excerpt from the tile loop (l, x and y index the current layer and tile)
using (Graphics g = Graphics.FromImage(MapBuffer as Image))
using (Brush brush = new SolidBrush(Color.White))
using (Pen pen = new Pen(Color.FromArgb(255, 0, 0, 0), 1))
{
    g.FillRectangle(brush, new Rectangle(new Point(0, 0), new Size(CurrentMap.MapSize.Width * TileSize, CurrentMap.MapSize.Height * TileSize)));
    Tile tile = CurrentMap.Tiles[l, x, y];
    if (tile.Background != null) g.DrawImage(tile.Background, new Point(tile.X * TileSize, tile.Y * TileSize));
    g.DrawRectangle(pen, x * TileSize, y * TileSize, TileSize, TileSize);
}
Is there a possible way of drawing an image with additive blending? If so, I'd be forever grateful if someone could point out how. Thank you.
EDIT #1:
For drawing images, we are using a color matrix to set hue and alpha (opacity), like this:
ColorMatrix matrix = new ColorMatrix
(
    new Single[][]
    {
        new Single[] {r, 0, 0, 0, 0},
        new Single[] {0, g, 0, 0, 0},
        new Single[] {0, 0, b, 0, 0},
        new Single[] {0, 0, 0, a, 0},
        new Single[] {0, 0, 0, 0, 1}
    }
);
Maybe the color matrix can be used for additive blending?
EDIT #2:
Just found this article by Mahesh Chand.
After further browsing, it may not be possible with a color matrix, even though it can accomplish a lot regarding color transformations.
I will answer my own question if I find a solution.
Thank you for your help.
EDIT #3:
XNA has a lot of documentation here about blending. I found the formula used to accomplish additive blending on each pixel of an image:
PixelColor = (source * [1, 1, 1, 1]) + (destination * [1, 1, 1, 1])
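As a quick worked example of that formula: a source pixel with RGB (0.5, 0.2, 0.1) drawn over a destination pixel of (0.3, 0.3, 0.3) yields (0.8, 0.5, 0.4), with each channel clamped to 1.0.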
Maybe there's a way of using this formula in the current context?
I will start a 50-reputation bounty with the next edit; we really need this to work.
Thank you again for your time.
EDIT #4
Thanks to axon, the problem is now solved. Using XNA and its SpriteBatch, you can accomplish additive blending like so:
First of all, you create a GraphicsDevice and a SpriteBatch:
// In the following example, we want to draw inside a Panel called PN_Canvas.
// If you want to draw directly on the form, simply use "this" if you
// write the following code in your form class
PresentationParameters pp = new PresentationParameters();
// Replace PN_Canvas with the control to be drawn on
pp.BackBufferHeight = PN_Canvas.Height;
pp.BackBufferWidth = PN_Canvas.Width;
pp.DeviceWindowHandle = PN_Canvas.Handle;
pp.IsFullScreen = false;
device = new GraphicsDevice(GraphicsAdapter.DefaultAdapter, GraphicsProfile.Reach, pp);
batch = new SpriteBatch(device);
Then, when it's time to draw on the control or on the form (in the OnPaint event, for example), you can use the following code block:
// You should always clear the GraphicsDevice first
device.Clear(Microsoft.Xna.Framework.Color.Black);
// Note the last parameter of Begin method
batch.Begin(SpriteSortMode.BackToFront, BlendState.Additive);
batch.Draw( /* Things you want to draw, positions and other infos */ );
batch.End();
// The Present method will draw buffer onto control or form
device.Present();
Either 1) use XNA (recommended for speed), or 2) use pixel operations in C#. There may be other methods, but both of these work (I'm using each of them in apps I maintain, for 3D effects and image analysis respectively).
Pixel Operations in C#:
Using three bitmaps, bmpA, bmpB, and bmpC, where you want to store bmpA + bmpB in bmpC:
for (int y = 0; y < bmpA.Height; y++)
{
    for (int x = 0; x < bmpA.Width; x++)
    {
        Color cA = bmpA.GetPixel(x, y);
        Color cB = bmpB.GetPixel(x, y);
        // Clamp each channel to 255, otherwise Color.FromArgb throws on overflow
        Color cC = Color.FromArgb(cA.A,
            Math.Min(255, cA.R + cB.R),
            Math.Min(255, cA.G + cB.G),
            Math.Min(255, cA.B + cB.B));
        bmpC.SetPixel(x, y, cC);
    }
}
The above code is very slow. A faster C# solution uses pointers, like this:
// Assumes all bitmaps are the same size and the same pixel format
BitmapData bmpDataA = bmpA.LockBits(new Rectangle(0, 0, bmpA.Width, bmpA.Height), ImageLockMode.ReadOnly, bmpA.PixelFormat);
BitmapData bmpDataB = bmpB.LockBits(new Rectangle(0, 0, bmpA.Width, bmpA.Height), ImageLockMode.ReadOnly, bmpA.PixelFormat);
BitmapData bmpDataC = bmpC.LockBits(new Rectangle(0, 0, bmpA.Width, bmpA.Height), ImageLockMode.WriteOnly, bmpA.PixelFormat);
// byte* (not void*) so that pointer arithmetic works per byte
byte* pBmpA = (byte*)bmpDataA.Scan0.ToPointer();
byte* pBmpB = (byte*)bmpDataB.Scan0.ToPointer();
byte* pBmpC = (byte*)bmpDataC.Scan0.ToPointer();
// Assumes no row padding (Stride == Width * bytes per pixel)
int bytesPerPix = bmpDataA.Stride / bmpA.Width;
for (int y = 0; y < bmpA.Height; y++)
{
    for (int x = 0; x < bmpA.Width; x++, pBmpA += bytesPerPix, pBmpB += bytesPerPix, pBmpC += bytesPerPix)
    {
        // Clamp the per-channel sums to 255 so they don't wrap around
        pBmpC[0] = (byte)Math.Min(255, pBmpA[0] + pBmpB[0]); // B
        pBmpC[1] = (byte)Math.Min(255, pBmpA[1] + pBmpB[1]); // G
        pBmpC[2] = (byte)Math.Min(255, pBmpA[2] + pBmpB[2]); // R
    }
}
bmpA.UnlockBits(bmpDataA);
bmpB.UnlockBits(bmpDataB);
bmpC.UnlockBits(bmpDataC);
The above method requires pointers and hence must be compiled with the /unsafe compiler option (with the code in an unsafe context). It also assumes one byte for each of B, G, and R; change the code to suit your pixel format.
Using XNA is a lot faster, since it is hardware-accelerated by the GPU. It basically consists of the following:
1. Create the geometry needed to draw the image (a rectangle, most likely a full-screen quad).
2. Write a vertex shader and a pixel shader. The vertex shader can simply pass the geometry through unmodified, or apply an orthographic projection (depending on what coordinates you want to work with for the quad). The pixel shader will have the following lines (HLSL):
float4 ps(vertexOutput IN) : COLOR
{
    float3 a = tex2D(ColorSampler, IN.UV).rgb;
    float3 b = tex2D(ColorSampler2, IN.UV).rgb;
    return float4(a + b, 1.0f);
}
There are different methods available for accessing textures. The following will also work (depending on how you want the XNA code to bind to the shader parameters):
float4 ps(vertexOutput IN) : COLOR
{
    float3 a = texA.Sample(samplerState, IN.UV).xyz;
    float3 b = texB.Sample(samplerState, IN.UV).xyz;
    return float4(a + b, 1.0f);
}
Which of the above shaders you use will depend on whether you want to use the "sampler2D" or "texture" HLSL interfaces to access the textures.
You should also be careful to use an appropriate sampler state so that no filtering (e.g. linear interpolation) is applied when looking up colour values, unless that's something you want (in which case use something higher-quality/higher-order).
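For instance, in XNA you can request point (nearest-neighbour) sampling from the C# side when beginning the sprite batch, rather than in the .fx file; a minimal sketch (the nulls just take the default states):
batch.Begin(SpriteSortMode.Deferred, BlendState.Additive, SamplerState.PointClamp, null, null);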
XNA also has built-in BlendStates you can use to specify how overlapped textures are combined, e.g. BlendState.Additive (see the updated original post).