I have a Texture2D that I'm loading from the Content Pipeline. That's working fine, but as soon as I try to use SetData on a completely different Texture2D all of the textures in my game go completely black:
This is in my HUDMeter class, the class whose gradient I want to be just red:
Texture2D colorGrad = Content.Load<Texture2D>(GradientAsset);
Color[,] pixels = new Color[colorGrad.Width, colorGrad.Height];
Color[] pixels1D = new Color[colorGrad.Width * colorGrad.Height];
pixels = GetRedChannel(colorGrad);
pixels1D = Color2DToColor1D(pixels, colorGrad.Width);
System.Diagnostics.Debug.WriteLine(pixels[32,32]);
Gradient = colorGrad;
Gradient.SetData<Color>(pixels1D);
These helper methods are based on Riemer's tutorial:
protected Color[,] GetRedChannel(Texture2D texture)
{
    Color[,] pixels = TextureTo2DArray(texture);
    Color[,] output = new Color[texture.Width, texture.Height];
    for (int x = 0; x < texture.Width; x++)
    {
        for (int y = 0; y < texture.Height; y++)
        {
            output[x, y] = new Color(pixels[x, y].G, 0, 0);
        }
    }
    return output;
}
protected Color[,] TextureTo2DArray(Texture2D texture)
{
    Color[] colors1D = new Color[texture.Width * texture.Height];
    texture.GetData(colors1D);
    Color[,] colors2D = new Color[texture.Width, texture.Height];
    for (int x = 0; x < texture.Width; x++)
        for (int y = 0; y < texture.Height; y++)
            colors2D[x, y] = colors1D[x + y * texture.Width];
    return colors2D;
}
private Color[] Color2DToColor1D(Color[,] colors, int width)
{
    Color[] output = new Color[colors.Length];
    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < colors.Length / width; y++)
        {
            output[x + y * width] = colors[x % width, y % (colors.Length / width)];
        }
    }
    return output;
}
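As an aside, both helpers rely on the standard `index = x + y * width` mapping (the `%` operations in `Color2DToColor1D` are no-ops, since `x` and `y` never reach the moduli). The round trip can be verified in isolation with plain ints; the helper names below are hypothetical stand-ins, not the original methods:

```csharp
using System;

// Flatten a 2D grid into a 1D array with index = x + y * width,
// mirroring what TextureTo2DArray/Color2DToColor1D do with Colors.
int[] To1D(int[,] grid)
{
    int width = grid.GetLength(0), height = grid.GetLength(1);
    var output = new int[width * height];
    for (int x = 0; x < width; x++)
        for (int y = 0; y < height; y++)
            output[x + y * width] = grid[x, y];
    return output;
}

int[,] To2D(int[] flat, int width)
{
    int height = flat.Length / width;
    var grid = new int[width, height];
    for (int x = 0; x < width; x++)
        for (int y = 0; y < height; y++)
            grid[x, y] = flat[x + y * width];
    return grid;
}

var original = new int[2, 3] { { 1, 2, 3 }, { 4, 5, 6 } };
var roundTrip = To2D(To1D(original), 2);
Console.WriteLine(roundTrip[1, 2]); // prints 6, same as original[1, 2]
```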
And here is the code to draw the sprites, this works fine and is how I always draw sprites:
batch.Draw(meter.Gradient, new Vector2(X, Y), Color.White);
Update:
I've actually found that the sprites that don't use the same file are not black. Does Texture2D.SetData<>() actually change the loaded texture itself? What is the use of that?
Update:
I just tried to use the Alpha as well as RGB and it's working. I'm thinking that there's something wrong with one of the conversion methods.
If you do this:
Texture2D textureA = Content.Load<Texture2D>("MyTexture");
Texture2D textureB = Content.Load<Texture2D>("MyTexture");
Both textureA and textureB refer to the same object. So if you call SetData on one of them, it will affect both of them. This is because ContentManager keeps an internal list of resources already loaded, so it doesn't have to keep reloading the same resource.
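The aliasing can be demonstrated with a toy cache standing in for ContentManager (the `Load` below is a hypothetical sketch, not the real XNA API): both loads return the very same object, so a "SetData" through one variable is visible through the other.

```csharp
using System;
using System.Collections.Generic;

// Toy stand-in for ContentManager's internal cache (not the real XNA class).
var cache = new Dictionary<string, int[]>();

int[] Load(string assetName)
{
    if (!cache.TryGetValue(assetName, out var asset))
    {
        asset = new[] { 10, 20, 30 };  // pretend this was read from disk
        cache[assetName] = asset;
    }
    return asset;  // cached: a second Load returns the same object
}

var textureA = Load("MyTexture");
var textureB = Load("MyTexture");
Console.WriteLine(ReferenceEquals(textureA, textureB)); // prints True

textureA[0] = 99;               // mutate through one alias...
Console.WriteLine(textureB[0]); // ...and the other sees it: prints 99
```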
The solution would be to create a new Texture2D object of the same size, call GetData on the one loaded by ContentManager, and then SetData on the new texture.
Example (not tested):
Color[] buffer = new Color[textureA.Width * textureA.Height];
Texture2D textureB = new Texture2D(textureA.GraphicsDevice,
                                   textureA.Width,
                                   textureA.Height);
textureA.GetData(buffer);
textureB.SetData(buffer);
Call Dispose() on the new texture when you are finished with it (e.g. in your Game.UnloadContent method). But never dispose of the one loaded by ContentManager (because, like I said, it is a shared object; use ContentManager.Unload instead).
Related
I could use just a little help. I am loading a png into a Texture2D, and have managed to flip it over the y axis using the following script I found. I need to flip it over the x axis now. I know a small modification should do it, but I have not managed to get the results I want.
Texture2D FlipTexture(Texture2D original)
{
    Texture2D flipped = new Texture2D(original.width, original.height);
    int xN = original.width;
    int yN = original.height;
    for (int i = 0; i < xN; i++)
    {
        for (int j = 0; j < yN; j++)
        {
            flipped.SetPixel(xN - i - 1, j, original.GetPixel(i, j));
        }
    }
    flipped.Apply();
    return flipped;
}
Say "pix" holds the pixel data of a PNG:

Texture2D photo;
Color[] pix = photo.GetPixels(startAcross, 0, 256, 256);
// (256 is just an example size)
This ENTIRELY ROTATES a PNG 180 degrees:
System.Array.Reverse(pix, 0, pix.Length);
This mirrors a PNG just around the upright axis:
for(int row=0;row<256;++row)
System.Array.Reverse(pix, row*256, 256);
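The two `Array.Reverse` tricks can be checked on a small row-major grid of ints (toy data standing in for the `Color[]` from `GetPixels`): reversing the whole buffer rotates 180 degrees, reversing each row mirrors horizontally.

```csharp
using System;

// A 2x3 "image" stored row-major, like GetPixels output (ints, not Colors):
// row 0: 1 2 3
// row 1: 4 5 6
int width = 3, height = 2;
int[] pix = { 1, 2, 3, 4, 5, 6 };

// Full reverse = 180-degree rotation.
var rotated = (int[])pix.Clone();
Array.Reverse(rotated, 0, rotated.Length);
Console.WriteLine(string.Join(" ", rotated)); // prints 6 5 4 3 2 1

// Per-row reverse = horizontal mirror (flip around the upright axis).
var mirrored = (int[])pix.Clone();
for (int row = 0; row < height; ++row)
    Array.Reverse(mirrored, row * width, width);
Console.WriteLine(string.Join(" ", mirrored)); // prints 3 2 1 6 5 4
```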
Texture2D FlipTexture(Texture2D original, bool upSideDown = true)
{
    Texture2D flipped = new Texture2D(original.width, original.height);
    int xN = original.width;
    int yN = original.height;
    for (int i = 0; i < xN; i++)
    {
        for (int j = 0; j < yN; j++)
        {
            if (upSideDown)
            {
                flipped.SetPixel(j, xN - i - 1, original.GetPixel(j, i));
            }
            else
            {
                flipped.SetPixel(xN - i - 1, j, original.GetPixel(i, j));
            }
        }
    }
    flipped.Apply();
    return flipped;
}
To call it:
FlipTexture(camTexture, true); //Upside down
FlipTexture(camTexture, false); //Sideways
This flips the texture upside down:
int width = texture.width;
int height = texture.height;
Texture2D snap = new Texture2D(width, height);
Color[] pixels = texture.GetPixels();
Color[] pixelsFlipped = new Color[pixels.Length];
for (int i = 0; i < height; i++)
{
    Array.Copy(pixels, i * width, pixelsFlipped, (height - i - 1) * width, width);
}
snap.SetPixels(pixelsFlipped);
snap.Apply();
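The row-copy idea above can be sanity-checked with plain ints standing in for the pixel buffers: each source row `i` lands at destination row `height - i - 1`.

```csharp
using System;

// A 3x2 row-major "image" (toy ints instead of Colors):
// row 0: 1 2 | row 1: 3 4 | row 2: 5 6
int width = 2, height = 3;
int[] pixels = { 1, 2, 3, 4, 5, 6 };
var flipped = new int[pixels.Length];

// Copy each source row i to destination row (height - i - 1).
for (int i = 0; i < height; i++)
    Array.Copy(pixels, i * width, flipped, (height - i - 1) * width, width);

Console.WriteLine(string.Join(" ", flipped)); // prints 5 6 3 4 1 2
```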
I am trying to create a program that accepts an image, iterates over each pixel, normalizes the pixel, and re-creates a NEW image that looks the same as the original but has normalized pixels instead.
public void parseJpeg(String jpegPath)
{
    var normalizedRed = 0.0;
    var normalizedGreen = 0.0;
    var normalizedBlue = 0.0;
    Bitmap normalizedImage = null;
    var image = new Bitmap(jpegPath);
    normalizedImage = new Bitmap(image.Width, image.Height);
    for (int x = 0; x < image.Width; ++x)
    {
        for (int y = 0; y < image.Height; ++y)
        {
            Color color = image.GetPixel(x, y);
            double exponent = 2;
            double redDouble = Convert.ToDouble(color.R);
            double blueDouble = Convert.ToDouble(color.B);
            double greenDouble = Convert.ToDouble(color.G);
            double redResult = Math.Pow(redDouble, exponent);
            double blueResult = Math.Pow(blueDouble, exponent);
            double greenResult = Math.Pow(greenDouble, exponent);
            double totalResult = redResult + blueResult + greenResult;
            normalizedRed = Convert.ToDouble(color.R) / Math.Sqrt(totalResult);
            normalizedGreen = Convert.ToDouble(color.G) / Math.Sqrt(totalResult);
            normalizedBlue = Convert.ToDouble(color.B) / Math.Sqrt(totalResult);
            Color newCol = Color.FromArgb(Convert.ToInt32(normalizedRed), Convert.ToInt32(normalizedGreen), Convert.ToInt32(normalizedBlue));
            normalizedImage.SetPixel(x, y, newCol);
        }
    }
    normalizedImage.Save("C:\\Users\\username\\Desktop\\test1.jpeg");
    resultsViewBox.AppendText("Process completed.\n");
}
Using the above code produces all black pixels and I do not understand why. When it normalizes, it reduces each RGB component to at most 1. After normalization, how do I set pixels with the NEW normalized values?
When I perform the below code, I get a black and blue image in my preview, but when I open the file it's blank. This is better than what I was getting before, which was ALL black pixels. This only works on one image though. So I am not sure how much of a step forward it is.
public void parseJpeg(String jpegPath)
{
    Bitmap normalizedImage = null;
    var image = new Bitmap(jpegPath);
    normalizedImage = new Bitmap(image.Width, image.Height);
    for (int x = 0; x < image.Width; ++x)
    {
        for (int y = 0; y < image.Height; ++y)
        {
            Color color = image.GetPixel(x, y);
            float norm = (float)System.Math.Sqrt(color.R * color.R + color.B * color.B + color.G * color.G);
            Color newCol = Color.FromArgb(Convert.ToInt32(norm));
            normalizedImage.SetPixel(x, y, newCol);
        }
    }
    normalizedImage.Save("C:\\Users\\username\\Desktop\\test1.jpeg");
    resultsViewBox.AppendText("Process completed.\n");
}
I found the code for what I was trying to do:
http://www.lukehorvat.com/blog/normalizing-image-brightness-in-csharp/
public void parseJpeg(String jpegPath)
{
    var image = new Bitmap(jpegPath);
    normalizedImage = new Bitmap(image.Width, image.Height);
    for (int x = 0; x < image.Width; ++x)
    {
        for (int y = 0; y < image.Height; ++y)
        {
            float pixelBrightness = image.GetPixel(x, y).GetBrightness();
            minBrightness = Math.Min(minBrightness, pixelBrightness);
            maxBrightness = Math.Max(maxBrightness, pixelBrightness);
        }
    }
    for (int x = 0; x < image.Width; x++)
    {
        for (int y = 0; y < image.Height; y++)
        {
            Color pixelColor = image.GetPixel(x, y);
            float normalizedPixelBrightness = (pixelColor.GetBrightness() - minBrightness) / (maxBrightness - minBrightness);
            Color normalizedPixelColor = ColorConverter.ColorFromAhsb(pixelColor.A, pixelColor.GetHue(), pixelColor.GetSaturation(), normalizedPixelBrightness);
            normalizedImage.SetPixel(x, y, normalizedPixelColor);
        }
    }
    normalizedImage.Save("C:\\Users\\username\\Desktop\\test1.jpeg");
    resultsViewBox.AppendText("Process completed.\n");
}
You are creating a new Bitmap and saving over the file for every pixel in your image. Move the
normalizedImage = new Bitmap(image.Width, image.Height);
line to before your loops, and the
normalizedImage.Save("C:\\Users\\username\\Desktop\\test1.jpeg");
line to after your loops.
Your normalization algorithm does not appear to be correct. Let's say your original color was red (255, 0, 0). Then your totalResult will be 65025, and your normalizedRed will be 255 / sqrt(65025), which is 1, giving you a new normalized color of (1, 0, 0), which on a 0-255 scale is essentially black.
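The arithmetic is easy to check, and a hypothetical fix (not from the original post) is to scale the unit vector back up by 255 before building the color:

```csharp
using System;

// Pure red, as in the example above.
double r = 255, g = 0, b = 0;

double length = Math.Sqrt(r * r + g * g + b * b); // sqrt(65025) = 255
double normalizedRed = r / length;                // 1.0: nearly black as a byte

Console.WriteLine(length);        // prints 255
Console.WriteLine(normalizedRed); // prints 1

// Hypothetical fix: rescale the unit vector into the 0-255 byte range.
int displayRed = (int)Math.Round(normalizedRed * 255);
Console.WriteLine(displayRed);    // prints 255, i.e. full red again
```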
Just as a note, your code will run a bit faster if you declare the doubles once outside the loop and assign them within it, rather than declaring all eight doubles afresh on each iteration.
Instead of messing with the raw colors, you should use the brightness or luminosity factor to achieve normalization. Here is a link to an already-answered question that can help you; you can convert each RGB pixel to HSL and manipulate the L factor:
How do I normalize an image?
The code that you shared is actually a trimmed-down version of HSL manipulation.
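The core of that trimmed-down HSL approach is a min-max rescale of the brightness channel. The formula alone can be sketched on plain floats (toy brightness values, not a real image):

```csharp
using System;

// Toy per-pixel brightness values (0..1), as GetBrightness would return.
float[] brightness = { 0.5f, 0.625f, 1f };

// First pass: find the brightness range, as in the linked code.
float min = float.MaxValue, max = float.MinValue;
foreach (var v in brightness)
{
    min = Math.Min(min, v);
    max = Math.Max(max, v);
}

// Second pass: min-max rescale, darkest pixel -> 0, brightest -> 1.
var normalized = new float[brightness.Length];
for (int i = 0; i < brightness.Length; i++)
    normalized[i] = (brightness[i] - min) / (max - min);

Console.WriteLine(string.Join(" ", normalized)); // prints 0 0.25 1
```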
I'm trying to create a collision detection method for a simple XNA racing game, for which I am using this tutorial on how to extract texture data. What I'm trying to do is to check if any of the colors in that area of the texture are blue (which is the color of the walls on my racing track). However, I keep getting the error in the title. Can anyone explain to me why this happens?
code:
public bool Collision()
{
    int width = arrow.Width;  // arrow is the name of my "car" texture (it's an arrow)
    int height = arrow.Height;
    int xr = (int)x;  // x is the x position of my arrow
    int yr = (int)y;  // y is the y position of my arrow
    Color[] rawData = new Color[width * height];
    Rectangle extractRegion = new Rectangle(xr, yr, width, height);
    track.GetData<Color>(0, extractRegion, rawData, 0, width * height);  // error occurs here
    Color[,] rawDataAsGrid = new Color[height, width];
    for (int row = 0; row < height; row++)
    {
        for (int column = 0; column < width; column++)
        {
            rawDataAsGrid[row, column] = rawData[row * width + column];
        }
    }
    for (int x1 = (int)x; x1 < width; x1++)
    {
        for (int y1 = (int)y; y1 < height; y1++)
        {
            if (rawDataAsGrid[x1, y1] == Color.Blue)
            {
                return true;
            }
        }
    }
    return false;
}
edit: I got it working!
Your rawData is not of sufficient length to receive the data you attempt to get with the GetData() method.
Change this line:
Color[] rawData = new Color[width * height];
into:
Color[] rawData = new Color[track.Width * track.Height];
And that should do it. Hope it helps!
Similar to many programs that take a tiled map, like that in the game Terraria, and turn the map into a single picture of the entire map, I am trying to do something similar. The problem is, my block textures are in a single large texture atlas and are referenced by index, and I am having trouble taking the color data from a single block and placing it into the correct place in the larger texture.
This is my code so far.
Getting the source from the index (this code works):
public static Rectangle GetSourceForIndex(int index, Texture2D tex)
{
    int dim = tex.Width / TEXTURE_MAP_DIM;
    int startx = index % TEXTURE_MAP_DIM;
    int starty = index / TEXTURE_MAP_DIM;
    return new Rectangle(startx * dim, starty * dim, dim, dim);
}
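The index arithmetic in `GetSourceForIndex` can be checked without a GraphicsDevice by substituting plain ints. The atlas dimensions below are assumptions for illustration (a 4-tiles-per-row atlas in a 64-pixel-wide texture), and the tuple stands in for XNA's Rectangle:

```csharp
using System;

const int TEXTURE_MAP_DIM = 4; // tiles per atlas row (assumed value)
int atlasWidth = 64;           // atlas texture width in pixels (assumed)

// Same math as GetSourceForIndex, with the Rectangle replaced by a tuple.
(int X, int Y, int W, int H) SourceForIndex(int index)
{
    int dim = atlasWidth / TEXTURE_MAP_DIM; // each tile is 16x16 here
    int startx = index % TEXTURE_MAP_DIM;   // column in the atlas grid
    int starty = index / TEXTURE_MAP_DIM;   // row in the atlas grid
    return (startx * dim, starty * dim, dim, dim);
}

// Index 5 = row 1, column 1 -> pixel rectangle (16, 16, 16, 16).
Console.WriteLine(SourceForIndex(5)); // prints (16, 16, 16, 16)
```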
Getting the texture at the index (Where the problems start):
public static Texture2D GetTextureAtIndex(int index, Texture2D tex)
{
    Rectangle source = GetSourceForIndex(index, tex);
    Texture2D texture = new Texture2D(_device, source.Width, source.Height);
    Color[] colors = new Color[tex.Width * tex.Height];
    tex.GetData<Color>(colors);
    Color[] colorData = new Color[source.Width * source.Height];
    for (int x = 0; x < source.Width; x++)
    {
        for (int y = 0; y < source.Height; y++)
        {
            colorData[x + y * source.Width] = colors[x + source.X + (y + source.Y) * tex.Width];
        }
    }
    texture.SetData<Color>(colorData);
    return texture;
}
Putting the texture into the larger picture (this is completely wrong I'm sure):
private void doSave()
{
    int texWidth = this._rWidth * Region.REGION_DIM * 16;
    int texHeight = this._rHeight * Region.REGION_DIM * 16;
    Texture2D picture = new Texture2D(Game.GraphicsDevice, texWidth, texHeight);
    Color[] pictureData = new Color[picture.Width * picture.Height];
    for (int blockX = 0; blockX < texWidth / 16; blockX++)
    {
        for (int blockY = 0; blockY < texHeight / 16; blockY++)
        {
            Block b = this.GetBlockAt(blockX, blockY);
            Texture2D toCopy = TextureManager.GetTextureAtIndex(b.GetIndexBasedOnMetadata(b.GetMetadataForSurroundings(this, blockX, blockY)), b.GetTextureFile());
            Color[] copyData = new Color[toCopy.Width * toCopy.Height];
            Rectangle source = new Rectangle(blockX * 16, blockY * 16, 16, 16);
            toCopy.GetData<Color>(copyData);
            for (int x = 0; x < source.Width; x++)
            {
                for (int y = 0; y < source.Height; y++)
                {
                    pictureData[x + source.X + (y + source.Y) * picture.Width] = copyData[x + y * source.Width];
                }
            }
        }
    }
    picture.SetData<Color>(pictureData);
    string fileName = "picture" + DateTime.Now.ToString(@"MM\-dd\-yyyy-h\-mm-tt");
    FileStream stream = File.Open(this.GetSavePath() + @"Pictures\" + fileName, FileMode.OpenOrCreate);
    picture.SaveAsPng(stream, picture.Width, picture.Height);
    stream.Close();
}
I can't find any good descriptions on how to properly convert between the texture and a one dimensional color array. It would be much easier if I knew how to easily and properly place a square of colors into a larger two dimensional texture.
TL;DR: How do you put a smaller Texture into a larger texture?
Create a RenderTarget2D the size of your largest texture and set it as the active render target. Draw the large texture, then draw the smaller ones on top of it. Then use the RenderTarget2D you just drew to in place of the original texture (a RenderTarget2D is itself a Texture2D).
int texWidth = this._rWidth * Region.REGION_DIM * 16;
int texHeight = this._rHeight * Region.REGION_DIM * 16;
_renderTarget = new RenderTarget2D(GraphicsDevice, texWidth, texHeight);
GraphicsDevice.SetRenderTarget(_renderTarget);
GraphicsDevice.Clear(Color.Transparent);
_spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
for (int blockX = 0; blockX < texWidth / 16; blockX++)
{
    for (int blockY = 0; blockY < texHeight / 16; blockY++)
    {
        Block b = this.GetBlockAt(blockX, blockY);
        _spriteBatch.Draw(
            TextureManager.GetTextureAtIndex(b.GetIndexBasedOnMetadata(b.GetMetadataForSurroundings(this, blockX, blockY)), b.GetTextureFile()),
            new Rectangle(blockX * 16, blockY * 16, 16, 16),
            Color.White);
    }
}
_spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);
var picture = _renderTarget;
I am writing to a Graphics object dynamically and don't know the actual size of the final image until all output is passed.
So, I create a large image and create Graphics object from it:
int iWidth = 600;
int iHeight = 2000;
bmpImage = new Bitmap(iWidth, iHeight);
graphics = Graphics.FromImage(bmpImage);
graphics.Clear(Color.White);
How can I find the actual size of written content, so I will be able to create a new bitmap with this size and copy the content to it.
It is really hard to calculate the content size before drawing it, so I want to know if there is any other solution.
The best solution is probably to keep track of the maximum X and Y values that get used as you draw, though this will be an entirely manual process.
Another option would be to scan full rows and columns of the bitmap (starting from the right and the bottom) until you encounter a non-white pixel, but this will be a very inefficient process.
int width = 0;
int height = 0;
// Scan columns from the right edge inward until a non-white pixel is found.
for (int x = bmpImage.Width - 1; x >= 0; x--)
{
    bool foundNonWhite = false;
    width = x + 1;
    for (int y = 0; y < bmpImage.Height; y++)
    {
        // Compare ARGB values; Color equality also checks KnownColor identity,
        // so GetPixel(x, y) != Color.White would always be true.
        if (bmpImage.GetPixel(x, y).ToArgb() != Color.White.ToArgb())
        {
            foundNonWhite = true;
            break;
        }
    }
    if (foundNonWhite) break;
}
// Scan rows from the bottom edge upward the same way.
for (int y = bmpImage.Height - 1; y >= 0; y--)
{
    bool foundNonWhite = false;
    height = y + 1;
    for (int x = 0; x < bmpImage.Width; x++)
    {
        if (bmpImage.GetPixel(x, y).ToArgb() != Color.White.ToArgb())
        {
            foundNonWhite = true;
            break;
        }
    }
    if (foundNonWhite) break;
}
Again, I don't recommend this as a solution, but it will do what you want without your having to keep track of the coordinate space that you actually use.
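The scan itself can be prototyped on a small 2D int array (0 standing in for the white background, nonzero for drawn content); the canvas and variable names below are illustrative, not from the original code:

```csharp
using System;

// 3x4 canvas, 0 = background; content occupies the top-left 2x2 corner.
int[,] canvas =
{
    { 1, 1, 0, 0 },
    { 0, 1, 0, 0 },
    { 0, 0, 0, 0 },
};

int rows = canvas.GetLength(0), cols = canvas.GetLength(1);
int width = 0, height = 0;

// Scan columns right-to-left until a non-background cell appears.
for (int x = cols - 1; x >= 0; x--)
{
    bool found = false;
    for (int y = 0; y < rows; y++)
        if (canvas[y, x] != 0) { found = true; break; }
    if (found) { width = x + 1; break; }
}

// Scan rows bottom-to-top the same way.
for (int y = rows - 1; y >= 0; y--)
{
    bool found = false;
    for (int x = 0; x < cols; x++)
        if (canvas[y, x] != 0) { found = true; break; }
    if (found) { height = y + 1; break; }
}

Console.WriteLine($"{width} x {height}"); // prints 2 x 2
```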
Just check the value of these properties:
float width = graphics.VisibleClipBounds.Width;
float height = graphics.VisibleClipBounds.Height;
From the VisibleClipBounds documentation: "A RectangleF structure that represents a bounding rectangle for the visible clipping region of this Graphics."