I'm looking for a way to improve the performance of some drawing I am doing. Currently I am drawing a 32x32 grid of tiles, using the following code to draw onto the drawing context:
for (int x = startX; x < endX; x++)
{
for (int y = startY; y < endY; y++)
{
dg.Children.Add(
new ImageDrawing(_mapTiles[GameWorldObject.GameMap[x, y].GraphicsTile.TileStartPoint],
new Rect(CountX * 8, CountY * 8, 8, 8)
));
dg.Children.Add(
new GeometryDrawing(
null,
new Pen(
new SolidColorBrush(
Color.FromRgb(255, 0, 20)), .3),
new RectangleGeometry(
new Rect(CountX * 8, CountY * 8, 8, 8)
)
)
);
CountY++;
}
CountY = 0;
CountX++;
}
dc.DrawDrawing(dg);
The Image I am drawing is a CachedBitmap. Even using a CachedBitmap, I still have a delay of about a half second each time I need to redraw the Canvas.
Not sure if there is a more performant way to handle drawing to this grid. Eventually I want to expand the control to function as a mini-map, so I need to keep that in mind.
Also, I tried previously to just draw each bitmap directly to the drawing context but that seems a bit slower.
I added DrawingGroup.Freeze() before drawing, and it seemed to help with the performance.
If the minimap is mostly static, draw it into an Image once and then draw that image. Or you can use one big image, where you draw the whole map into it and just draw the currently visible part of it.
Edit: This blog post may also be worth a check, to see whether you are drawing with software or hardware acceleration.
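To make the first suggestion concrete, here is a minimal sketch of pre-rendering the tiles once into a frozen RenderTargetBitmap and reusing it on every redraw; BuildTileDrawingGroup is a placeholder for your existing tile loop and the 32 * 8 size is just the grid from the question.
// Sketch: render the tile DrawingGroup once, freeze the result, and reuse it on every redraw.
DrawingGroup dg = BuildTileDrawingGroup(); // hypothetical: wraps your existing 32x32 tile loop
dg.Freeze();
var rtb = new RenderTargetBitmap(32 * 8, 32 * 8, 96, 96, PixelFormats.Pbgra32);
var visual = new DrawingVisual();
using (DrawingContext ctx = visual.RenderOpen())
    ctx.DrawDrawing(dg);
rtb.Render(visual);
rtb.Freeze();
// later, each time the control redraws:
// dc.DrawImage(rtb, new Rect(0, 0, rtb.PixelWidth, rtb.PixelHeight));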
Here's an example using WriteableBitmap. The performance of this is mainly related to the size of the whole map, whereas your original method is more dependent on the number of tiles. You could alter it to have an alpha-blended border between the tiles, but leaving a gap between them would be easier and more performant. You won't need the code randomising the tiles, but you should have some dirty flag so you only redraw the bitmap when it's changed.
You may also want to look at my answer and the others to this question. That said, you don't have as many items, and 32x32 using your method wasn't slow for me.
<local:Map x:Name="map" />
<RepeatButton Click="Button_Click" Content="Change" />
private void Button_Click(object sender, RoutedEventArgs e)
{
map.seed++;
map.InvalidateVisual();
}
public class Map : FrameworkElement
{
private int[][] _mapTiles;
public Map()
{
_mapTiles = Directory.GetFiles(@"C:\Users\Public\Pictures\Sample Pictures", "*.jpg").Select(x =>
{
var b = new BitmapImage(new Uri(x));
var transform = new TransformedBitmap(b, new ScaleTransform((1.0 / b.PixelWidth)*tileSize,(1.0 / b.PixelHeight)*tileSize));
var conv = new FormatConvertedBitmap(transform, PixelFormats.Pbgra32, null, 0);
int[] data = new int[tileSize * tileSize];
conv.CopyPixels(data, tileSize * 4, 0);
return data;
}).ToArray();
bmp = new WriteableBitmap(w * tileSize, h * tileSize, 96, 96, PixelFormats.Pbgra32, null);
destData = new int[bmp.PixelWidth * bmp.PixelHeight];
}
const int w = 64, h = 64, tileSize = 8;
public int seed = 72141;
private int oldSeed = -1;
private WriteableBitmap bmp;
int[] destData;
protected override void OnRender(DrawingContext dc)
{
if(seed != oldSeed)
{
oldSeed = seed;
int startX = 0, endX = w;
int startY = 0, endY = h;
Random rnd = new Random(seed);
for(int x = startX; x < endX; x++)
{
for(int y = startY; y < endY; y++)
{
var tile = _mapTiles[rnd.Next(_mapTiles.Length)];
var rect = new Int32Rect(x * tileSize, y * tileSize, tileSize, tileSize);
for(int sourceY = 0; sourceY < tileSize; sourceY++)
{
int destY = ((rect.Y + sourceY) * (w * tileSize)) + rect.X;
Array.Copy(tile, sourceY * tileSize, destData, destY, tileSize);
}
}
}
bmp.WritePixels(new Int32Rect(0, 0, w * tileSize, h * tileSize), destData, w * tileSize * 4, 0);
}
dc.DrawImage(bmp,new Rect(0,0,w*tileSize,h*tileSize));
}
}
I have a Bitmap with various color patterns and I need to find the bounding rectangles of one given color (for example, red) within the Bitmap. I found some code to process images but am unable to figure out how to achieve this.
Any help would be highly appreciated.
This is my code.
private void LockUnlockBitsExample(PaintEventArgs e)
{
// Create a new bitmap.
Bitmap bmp = new Bitmap("c:\\fakePhoto.jpg");
// Lock the bitmap's bits.
Rectangle rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
System.Drawing.Imaging.BitmapData bmpData =
bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadWrite,
bmp.PixelFormat);
// Get the address of the first line.
IntPtr ptr = bmpData.Scan0;
// Declare an array to hold the bytes of the bitmap.
int bytes = Math.Abs(bmpData.Stride) * bmp.Height;
byte[] rgbValues = new byte[bytes];
// Copy the RGB values into the array.
System.Runtime.InteropServices.Marshal.Copy(ptr, rgbValues, 0, bytes);
// Set every third value to 255. A 24bpp bitmap will look red.
for (int counter = 2; counter < rgbValues.Length; counter += 3)
rgbValues[counter] = 255;
// Copy the RGB values back to the bitmap
System.Runtime.InteropServices.Marshal.Copy(rgbValues, 0, ptr, bytes);
// Unlock the bits.
bmp.UnlockBits(bmpData);
// Draw the modified image.
e.Graphics.DrawImage(bmp, 0, 150);
}
Edit: The Bitmap contains solid color shapes; multiple shapes with the same color can appear. I need to find the bounding rectangle of each shape.
Just like Paint fills a color with the bucket tool, I need the bounding rectangle of the filled area.
I can provide the x, y coordinates of a point on the Bitmap to find the bounding rectangle of the color.
You would do this just like any other code where you want to find the min or max value in a list, with the difference that you want to find both min and max, in both the X and Y dimensions. Ex:
public static Rectangle GetBounds(this Bitmap bmp, Color color)
{
int minX = int.MaxValue;
int minY = int.MaxValue;
int maxX = int.MinValue;
int maxY = int.MinValue;
for (int y = 0; y < bmp.Height; y++)
{
for (int x = 0; x < bmp.Width; x++)
{
var c = bmp.GetPixel(x, y);
if (color == c)
{
if (x < minX) minX = x;
if (x > maxX) maxX = x;
if (y < minY) minY = y;
if (y > maxY) maxY = y;
}
}
}
var width = maxX - minX;
var height = maxY - minY;
if (width <= 0 || height <= 0)
{
// Handle case where no color was found, or if color is a single row/column
return default;
}
return new Rectangle(minX, minY, width, height);
}
There are plenty of resources on how to use LockBits/pointers. So converting the code to use this instead of GetPixel is left as an exercise.
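For reference, a LockBits-based sketch of the same scan could look roughly like this (it assumes the bitmap is in, or is converted to, Format32bppArgb; GetBoundsFast is just a name picked for the illustration):
public static Rectangle GetBoundsFast(this Bitmap bmp, Color color)
{
    // Copy the raw pixel data once, then scan the managed buffer instead of calling GetPixel.
    var rect = new Rectangle(0, 0, bmp.Width, bmp.Height);
    var data = bmp.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly,
        System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    var buffer = new byte[data.Stride * data.Height];
    System.Runtime.InteropServices.Marshal.Copy(data.Scan0, buffer, 0, buffer.Length);
    bmp.UnlockBits(data);
    int minX = int.MaxValue, minY = int.MaxValue, maxX = int.MinValue, maxY = int.MinValue;
    for (int y = 0; y < bmp.Height; y++)
    {
        for (int x = 0; x < bmp.Width; x++)
        {
            int i = y * data.Stride + x * 4; // 32bpp layout: B, G, R, A
            if (buffer[i] == color.B && buffer[i + 1] == color.G &&
                buffer[i + 2] == color.R && buffer[i + 3] == color.A)
            {
                if (x < minX) minX = x;
                if (x > maxX) maxX = x;
                if (y < minY) minY = y;
                if (y > maxY) maxY = y;
            }
        }
    }
    var width = maxX - minX;
    var height = maxY - minY;
    if (width <= 0 || height <= 0)
        return default; // same edge-case handling as the GetPixel version above
    return new Rectangle(minX, minY, width, height);
}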
If you are not concerned with the performance, and an exact color match is enough for you, then just scan the bitmap:
var l = bmp.Width; var t = bmp.Height; var r = 0; var b = 0;
for (var y = 0; y < bmp.Height; y++)
{
    for (var x = 0; x < bmp.Width; x++)
    {
        var i = y * bmpData.Stride + x * 3; // 24bpp layout: 3 bytes (B, G, R) per pixel
        if (rgbValues[i + 2] == 255 && rgbValues[i + 1] == 0 && rgbValues[i] == 0) // exact red
        {
            l = Math.Min(l, x); r = Math.Max(r, x);
            t = Math.Min(t, y); b = Math.Max(b, y);
        }
    }
}
if (l <= r) // at least one point was found
    return new Rectangle(l, t, r - l + 1, b - t + 1);
else
    return new Rectangle(0, 0, 0, 0); // nothing found
You can search for the first point of each shape that occupies a distinct area on the Bitmap, read a single horizontal row to get the points of the given color, then loop vertically within that horizontal range to get the adjacent points.
Once you have all the points of each shape, you can calculate the bounding rectangle from the first and last points.
public static IEnumerable<Rectangle> GetColorRectangles(Bitmap src, Color color)
{
var rects = new List<Rectangle>();
var points = new List<Point>();
var srcRec = new Rectangle(0, 0, src.Width, src.Height);
var srcData = src.LockBits(srcRec, ImageLockMode.ReadOnly, src.PixelFormat);
var srcBuff = new byte[srcData.Stride * srcData.Height];
var pixSize = Image.GetPixelFormatSize(src.PixelFormat) / 8;
Marshal.Copy(srcData.Scan0, srcBuff, 0, srcBuff.Length);
src.UnlockBits(srcData);
Rectangle GetColorRectangle()
{
var curX = points.First().X;
var curY = points.First().Y + 1;
var maxX = points.Max(p => p.X);
for(var y = curY; y < src.Height; y++)
for(var x = curX; x <= maxX; x++)
{
var pos = (y * srcData.Stride) + (x * pixSize);
var blue = srcBuff[pos];
var green = srcBuff[pos + 1];
var red = srcBuff[pos + 2];
if (Color.FromArgb(red, green, blue).ToArgb().Equals(color.ToArgb()))
points.Add(new Point(x, y));
else
break;
}
var p1 = points.First();
var p2 = points.Last();
return new Rectangle(p1.X, p1.Y, p2.X - p1.X, p2.Y - p1.Y);
}
for (var y = 0; y < src.Height; y++)
{
for (var x = 0; x < src.Width; x++)
{
var pos = (y * srcData.Stride) + (x * pixSize);
var blue = srcBuff[pos];
var green = srcBuff[pos + 1];
var red = srcBuff[pos + 2];
if (Color.FromArgb(red, green, blue).ToArgb().Equals(color.ToArgb()))
{
var p = new Point(x, y);
if (!rects.Any(r => new Rectangle(r.X - 2, r.Y - 2,
r.Width + 4, r.Height + 4).Contains(p)))
points.Add(p);
}
}
if (points.Any())
{
var rect = GetColorRectangle();
rects.Add(rect);
points.Clear();
}
}
return rects;
}
Demo
private IEnumerable<Rectangle> shapesRects = Enumerable.Empty<Rectangle>();
private void pictureBox1_MouseClick(object sender, MouseEventArgs e)
{
var sx = 1f * pictureBox1.Image.Width / pictureBox1.ClientSize.Width;   // map the click from client coordinates to image pixels
var sy = 1f * pictureBox1.Image.Height / pictureBox1.ClientSize.Height;
var p = Point.Round(new PointF(e.X * sx, e.Y * sy));
var c = (pictureBox1.Image as Bitmap).GetPixel(p.X, p.Y);
shapesRects = GetColorRectangles(pictureBox1.Image as Bitmap, c);
pictureBox1.Invalidate();
}
private void pictureBox1_Paint(object sender, PaintEventArgs e)
{
if (shapesRects.Any())
using (var pen = new Pen(Color.Black, 2))
e.Graphics.DrawRectangles(pen, shapesRects.ToArray());
}
I am attempting to extract the audio content of a wav file and export the resultant waveform as an image (bmp/jpg/png).
So I have found the following code which draws a sine wave and works as expected:
string filename = @"C:\0\test.bmp";
int width = 640;
int height = 480;
Bitmap b = new Bitmap(width, height);
for (int i = 0; i < width; i++)
{
int y = (int)((Math.Sin((double)i * 2.0 * Math.PI / width) + 1.0) * (height - 1) / 2.0);
b.SetPixel(i, y, Color.Black);
}
b.Save(filename);
This works completely as expected, what I would like to do is replace
int y = (int)((Math.Sin((double)i * 2.0 * Math.PI / width) + 1.0) * (height - 1) / 2.0);
with something like
int y = converted and scaled float from monoWaveFileFloatValues
So how would I best go about doing this in the simplest manner possible?
I have 2 basic issues I need to deal with (i think)
convert float to int in a way which does not lose information; this is needed because of SetPixel(i, y, Color.Black), where x and y are both int
sample skipping on the x axis so the waveform fits into the defined space: audio length / image width gives the number of samples to average out, with each average represented by a single pixel
The other option is to find another method of plotting the waveform which does not rely on the method noted above. Using a chart might be a good method, but I would like to be able to render the image directly if possible.
This is all to be run from a console application and I have the audio data (minus the header) already in a float array.
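For what it's worth, a minimal sketch of those two points, reusing width, height and b from the sine example and assuming monoWaveFileFloatValues holds mono samples in the range -1..1, could look like this:
// Average a block of samples per pixel column, then map the average from [-1, 1] to a row.
int samplesPerPixel = Math.Max(1, monoWaveFileFloatValues.Length / width);
for (int i = 0; i < width; i++)
{
    int start = i * samplesPerPixel;
    if (start >= monoWaveFileFloatValues.Length) break;
    int end = Math.Min(start + samplesPerPixel, monoWaveFileFloatValues.Length);
    float sum = 0f;
    for (int s = start; s < end; s++)
        sum += monoWaveFileFloatValues[s];
    float avg = sum / (end - start);                    // still roughly in [-1, 1]
    int y = (int)((avg + 1.0f) * (height - 1) / 2.0f);  // same mapping as the sine example
    b.SetPixel(i, y, Color.Black);
}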
UPDATE 1
The following code enabled me to draw the required output using System.Windows.Forms.DataVisualization.Charting but it took about 30 seconds to process 27776 samples and whilst it does do what I need, it is far too slow to be useful. So I am still looking towards a solution which will draw the bitmap directly.
System.Windows.Forms.DataVisualization.Charting.Chart chart = new System.Windows.Forms.DataVisualization.Charting.Chart();
chart.Size = new System.Drawing.Size(640, 320);
chart.ChartAreas.Add("ChartArea1");
chart.Legends.Add("legend1");
// Plot {sin(x), 0, 2pi}
chart.Series.Add("sin");
chart.Series["sin"].LegendText = args[0];
chart.Series["sin"].ChartType = System.Windows.Forms.DataVisualization.Charting.SeriesChartType.Spline;
//for (double x = 0; x < 2 * Math.PI; x += 0.01)
for (int x = 0; x < audioDataLength; x ++)
{
//chart.Series["sin"].Points.AddXY(x, Math.Sin(x));
chart.Series["sin"].Points.AddXY(x, leftChannel[x]);
}
// Save sin_0_2pi.png image file
chart.SaveImage(@"c:\tmp\example.png", System.Drawing.Imaging.ImageFormat.Png);
Output shown below:
So I managed to figure it out using a code sample found here, though I made some minor changes to the way I interact with it.
public static Bitmap DrawNormalizedAudio(List<float> data, Color foreColor, Color backColor, Size imageSize, string imageFilename)
{
Bitmap bmp = new Bitmap(imageSize.Width, imageSize.Height);
int BORDER_WIDTH = 0;
float width = bmp.Width - (2 * BORDER_WIDTH);
float height = bmp.Height - (2 * BORDER_WIDTH);
using (Graphics g = Graphics.FromImage(bmp))
{
g.Clear(backColor);
Pen pen = new Pen(foreColor);
float size = data.Count;
for (float iPixel = 0; iPixel < width; iPixel += 1)
{
// determine start and end points within WAV
int start = (int)(iPixel * (size / width));
int end = (int)((iPixel + 1) * (size / width));
if (end > data.Count)
end = data.Count;
float posAvg, negAvg;
averages(data, start, end, out posAvg, out negAvg);
float yMax = BORDER_WIDTH + height - ((posAvg + 1) * .5f * height);
float yMin = BORDER_WIDTH + height - ((negAvg + 1) * .5f * height);
g.DrawLine(pen, iPixel + BORDER_WIDTH, yMax, iPixel + BORDER_WIDTH, yMin);
}
}
bmp.Save(imageFilename);
bmp.Dispose();
return null;
}
private static void averages(List<float> data, int startIndex, int endIndex, out float posAvg, out float negAvg)
{
posAvg = 0.0f;
negAvg = 0.0f;
int posCount = 0, negCount = 0;
for (int i = startIndex; i < endIndex; i++)
{
if (data[i] > 0)
{
posCount++;
posAvg += data[i];
}
else
{
negCount++;
negAvg += data[i];
}
}
if (posCount > 0)
posAvg /= posCount;
if (negCount > 0)
negAvg /= negCount;
}
In order to get it working I had to do a couple of things prior to calling DrawNormalizedAudio; you can see below what I needed to do:
Size imageSize = new Size();
imageSize.Width = 1000;
imageSize.Height = 500;
List<float> lst = leftChannel.OfType<float>().ToList(); //change float array to float list - see link below
DrawNormalizedAudio(lst, Color.Red, Color.White, imageSize, @"c:\tmp\example2.png");
* change float array to float list
The result of this is as follows, a waveform representation of a hand clap wav sample:
I am quite sure there needs to be some updates/revisions to the code, but it's a start and hopefully this will assist someone else who is trying to do the same thing I was.
If you can see any improvements that can be made, let me know.
UPDATES
NaN issue mentioned in the comments now resolved and code above updated.
Waveform Image updated to represent output fixed by removal of NaN values as noted in point 1.
UPDATE 1
Average level (not RMS) was determined by summing the max level for each sample point and dividing by the total number of samples. Examples of this can be seen below:
Silent Wav File:
Hand Clap Wav File:
Brownian, Pink & White Noise Wav File:
Here is a variation you may want to study. It scales the Graphics object so it can use the float data directly.
Note how I translate (i.e. move) the drawing area twice so I can do the drawing more conveniently!
It also uses the DrawLines method for drawing. The benefit in addition to speed is that the lines may be semi-transparent or thicker than one pixel without getting artifacts at the joints. You can see the center line shine through.
To do this I convert the float data to a List<PointF> using a little Linq magic.
I also make sure to put all GDI+ objects I create in a using clause so they will get disposed of properly.
...
using System.Windows.Forms;
using System.IO;
using System.Drawing;
using System.Drawing.Imaging;
using System.Drawing.Drawing2D;
..
..
class Program
{
static void Main(string[] args)
{
float[] data = initData(10000);
Size imgSize = new Size(1000, 400);
Bitmap bmp = drawGraph(data, imgSize , Color.Green, Color.Black);
bmp.Save("D:\\wave.png", ImageFormat.Png);
}
static float[] initData(int count)
{
float[] data = new float[count];
for (int i = 0; i < count; i++ )
{
data[i] = (float) ((Math.Sin(i / 12f) * 880 + Math.Sin(i / 15f) * 440
+ Math.Sin(i / 66f) * 110) / Math.Pow((i + 1), 0.33f));
}
return data;
}
static Bitmap drawGraph(float[] data, Size size, Color ForeColor, Color BackColor)
{
Bitmap bmp = new System.Drawing.Bitmap(size.Width, size.Height,
PixelFormat.Format32bppArgb);
Padding borders = new Padding(20, 20, 10, 50);
Rectangle plotArea = new Rectangle(borders.Left, borders.Top,
size.Width - borders.Left - borders.Right,
size.Height - borders.Top - borders.Bottom);
using (Graphics g = Graphics.FromImage(bmp))
using (Pen pen = new Pen(Color.FromArgb(224, ForeColor),1.75f))
{
g.SmoothingMode = SmoothingMode.AntiAlias;
g.Clear(Color.Silver);
using (SolidBrush brush = new SolidBrush(BackColor))
g.FillRectangle(brush, plotArea);
g.DrawRectangle(Pens.LightGoldenrodYellow, plotArea);
g.TranslateTransform(plotArea.Left, plotArea.Top);
g.DrawLine(Pens.White, 0, plotArea.Height / 2,
plotArea.Width, plotArea.Height / 2);
float dataHeight = Math.Max( data.Max(), - data.Min()) * 2;
float yScale = 1f * plotArea.Height / dataHeight;
float xScale = 1f * plotArea.Width / data.Length;
g.ScaleTransform(xScale, yScale);
g.TranslateTransform(0, dataHeight / 2);
var points = data.ToList().Select((y, x) => new { x, y })
.Select(p => new PointF(p.x, p.y)).ToList();
g.DrawLines(pen, points.ToArray());
g.ResetTransform();
g.DrawString(data.Length.ToString("###,###,###,##0") + " points plotted.",
new Font("Consolas", 14f), Brushes.Black,
plotArea.Left, plotArea.Bottom + 2f);
}
return bmp;
}
}
After using STFT (Short-time Fourier transform), the output is a matrix that represents a 3D plot, as in A[X, Y] = M, where A is the output matrix, X is the time, Y is the frequency, and the third dimension M is the amplitude, illustrated by the intensity of the pixel color as in the following pictures:
Spectrogram 2
How do I draw the output matrix A with a gradient of colors like in the pictures in C#? Is there a library that contains a spectrogram control for C#?
Update:
After some modifications to the given algorithm I could draw the spectrogram. I didn't change the color palette, except that the first color changed to black, but I don't know why it's very faded!
This one represents a sound saying
Bye Bye
Bye Bye Spectrogram
And this one of a pure sine wave so it's almost the same frequency all the time
Pure sine wave Spectrogram
The output is acceptable; it represents the frequencies of the input signal as expected, but I think there is a way to make the spectrogram as well illustrated as the ones in the examples. Could you please take a look at my code and suggest modifications?
This is the event handler:
private void SpectrogramButton_Click(object sender, EventArgs e)
{
Complex[][] SpectrogramData = Fourier_Transform.STFT(/*signal:*/ samples, /*windowSize:*/ 512, /*hopSize:*/ 512);
SpectrogramBox.Image = Spectrogram.DrawSpectrogram(SpectrogramData, /*Interpolation Factor:*/ 1000, /*Height:*/ 256);
}
And this one is the drawing function after my modifications:
public static Bitmap DrawSpectrogram(Complex[][] Data, int InterpolationFactor, int Height)
{
// target size:
Size sz = new Size(Data.GetLength(0), Height);
Bitmap bmp = new Bitmap(sz.Width, sz.Height);
// the data array:
//double[,] data = new double[222, 222];
// step sizes:
float stepX = 1f * sz.Width / Data.GetLength(0);
float stepY = 1f * sz.Height / Data[0].GetLength(0);
// create a few stop colors:
List<Color> baseColors = new List<Color>(); // create a color list
baseColors.Add(Color.Black);
baseColors.Add(Color.LightSkyBlue);
baseColors.Add(Color.LightGreen);
baseColors.Add(Color.Yellow);
baseColors.Add(Color.Orange);
baseColors.Add(Color.Red);
// and then interpolate a larger number of gradient colors:
List<Color> colors = interpolateColors(baseColors, InterpolationFactor);
// a few boring test data
//Random rnd = new Random(1);
//for (int x = 0; x < data.GetLength(0); x++)
// for (int y = 0; y < data.GetLength(1); y++)
// {
// //data[x, y] = rnd.Next((int)(300 + Math.Sin(x * y / 999) * 200)) +
// // rnd.Next(x + y + 111);
// data[x, y] = 0;
// }
// now draw the data:
float Max = Complex.Max(Data);
using (Graphics G = Graphics.FromImage(bmp))
for (int x = 0; x < Data.GetLength(0); x++)
for (int y = 0; y < Data[0].GetLength(0); y++)
{
int Val = (int)Math.Ceiling((Data[x][y].Magnitude / Max) * (InterpolationFactor - 1));
using (SolidBrush brush = new SolidBrush(colors[(int)Val]))
G.FillRectangle(brush, x * stepX, (Data[0].GetLength(0) - y) * stepY, stepX, stepY);
}
// and display the result
return bmp;
}
I don't really understand the log thing that you are talking about in your answers; I'm sorry, my knowledge here is limited.
Update:
This is the output after taking log10 of the magnitudes (negative values neglected):
This one of "Bye bye" from before:
A Shotgun Blast:
A Music Box:
I think this output is acceptable; it is different from the examples I showed at the beginning, but I think it's better.
No, there is no out of the box control I know of. There may well be outside libraries you can buy, of course, but shhh, you can't ask on SO about them..
In theory you could use, or I guess I should rather say abuse a Chart control for this. But since DataPoints are rather expensive objects, or at least more expensive than they look, this seems not advisable.
Instead you can simply draw the graph into a Bitmap yourself.
Step one is to decide on a gradient of colors. See the interpolateColors function here for an example of this!
Then you would simply do a double loop over the data using floats for the step and pixel sizes and do a Graphics.FillRectangle there.
Here is a simple example using GDI+ to create a Bitmap and a Winforms PictureBox for display. It doesn't add any axes to the graphic and fills it completely.
It first creates a few sample data and a gradient with 1000 colors. Then it draws into a Bitmap and displays the result:
private void button6_Click(object sender, EventArgs e)
{
// target size:
Size sz = pictureBox1.ClientSize;
Bitmap bmp = new Bitmap(sz.Width, sz.Height);
// the data array:
double[,] data = new double[222, 222];
// step sizes:
float stepX = 1f * sz.Width / data.GetLength(0);
float stepY = 1f * sz.Height / data.GetLength(1);
// create a few stop colors:
List<Color> baseColors = new List<Color>(); // create a color list
baseColors.Add(Color.RoyalBlue);
baseColors.Add(Color.LightSkyBlue);
baseColors.Add(Color.LightGreen);
baseColors.Add(Color.Yellow);
baseColors.Add(Color.Orange);
baseColors.Add(Color.Red);
// and then interpolate a larger number of gradient colors:
List<Color> colors = interpolateColors(baseColors, 1000);
// a few boring test data
Random rnd = new Random(1);
for (int x = 0; x < data.GetLength(0); x++)
for (int y = 0; y < data.GetLength(1); y++)
{
data[x, y] = rnd.Next( (int) (300 + Math.Sin(x * y / 999) * 200 )) +
rnd.Next( x + y + 111);
}
// now draw the data:
using (Graphics G = Graphics.FromImage(bmp))
for (int x = 0; x < data.GetLength(0); x++)
for (int y = 0; y < data.GetLength(1); y++)
{
using (SolidBrush brush = new SolidBrush(colors[(int)data[x, y]]))
G.FillRectangle(brush, x * stepX, y * stepY, stepX, stepY);
}
// and display the result
pictureBox1.Image = bmp;
}
Here is the function from the link:
List<Color> interpolateColors(List<Color> stopColors, int count)
{
SortedDictionary<float, Color> gradient = new SortedDictionary<float, Color>();
for (int i = 0; i < stopColors.Count; i++)
gradient.Add(1f * i / (stopColors.Count - 1), stopColors[i]);
List<Color> ColorList = new List<Color>();
using (Bitmap bmp = new Bitmap(count, 1))
using (Graphics G = Graphics.FromImage(bmp))
{
Rectangle bmpCRect = new Rectangle(Point.Empty, bmp.Size);
LinearGradientBrush br = new LinearGradientBrush
(bmpCRect, Color.Empty, Color.Empty, 0, false);
ColorBlend cb = new ColorBlend();
cb.Positions = new float[gradient.Count];
for (int i = 0; i < gradient.Count; i++)
cb.Positions[i] = gradient.ElementAt(i).Key;
cb.Colors = gradient.Values.ToArray();
br.InterpolationColors = cb;
G.FillRectangle(br, bmpCRect);
for (int i = 0; i < count; i++) ColorList.Add(bmp.GetPixel(i, 0));
br.Dispose();
}
return ColorList;
}
You would probably want to draw axes with labels etc. You can use Graphics.DrawString or TextRenderer.DrawText to do so. Just leave enough space around the drawing area!
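Purely as an illustration (reusing bmp and sz from the snippet above, and assuming you reserved a margin for the labels):
// Draw simple axis captions into the reserved margins.
using (var g = Graphics.FromImage(bmp))
using (var font = new Font("Consolas", 10f))
{
    g.DrawString("frequency", font, Brushes.Black, 2, 2); // top-left corner
    TextRenderer.DrawText(g, "time", font,
        new Point(sz.Width / 2, sz.Height - font.Height - 2), Color.Black); // bottom edge
}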
I used the data values, cast to int, as direct indices into the color table.
Depending on your data you will need to scale them down or even use a log conversion. The first of your images shows a logarithmic scale going from 100 to 20k; the second looks linear, going from 0 to 100.
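As a concrete (untested) sketch of that log conversion, something like the following maps a magnitude to a palette index on a dB-like scale; floorDb and the method name are my own choices:
int MagnitudeToColorIndex(double magnitude, double maxMagnitude, int paletteSize)
{
    const double floorDb = -80.0;                        // assumed dynamic range to display
    double db = 20.0 * Math.Log10(Math.Max(magnitude, 1e-12) / maxMagnitude); // 0 dB at the peak
    double t = (db - floorDb) / -floorDb;                // map [floorDb, 0] dB onto [0, 1]
    t = Math.Min(1.0, Math.Max(0.0, t));
    return (int)(t * (paletteSize - 1));
}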
If you show us your data structure we can give you further hints on how to adapt the code to use it.
You can create a bitmap as per the other answer. It's also common to use a color lookup table to convert FFT log magnitude to the color to use for each pixel or small rectangle.
I'm trying to scan 2 images (32bppArgb format), identify when there is a difference and store the difference block's bounds in a list of rectangles.
Suppose these are the images:
second:
I want to get the different rectangle bounds (the opened directory window in our case).
This is what I've done:
private unsafe List<Rectangle> CodeImage(Bitmap bmp, Bitmap bmp2)
{
List<Rectangle> rec = new List<Rectangle>();
bmData = bmp.LockBits(new System.Drawing.Rectangle(0, 0, 1920, 1080), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp.PixelFormat);
bmData2 = bmp2.LockBits(new System.Drawing.Rectangle(0, 0, 1920, 1080), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp2.PixelFormat);
IntPtr scan0 = bmData.Scan0;
IntPtr scan02 = bmData2.Scan0;
int stride = bmData.Stride;
int stride2 = bmData2.Stride;
int nWidth = bmp.Width;
int nHeight = bmp.Height;
int minX = int.MaxValue;
int minY = int.MaxValue;
int maxX = 0;
bool found = false;
for (int y = 0; y < nHeight; y++)
{
byte* p = (byte*)scan0.ToPointer();
p += y * stride;
byte* p2 = (byte*)scan02.ToPointer();
p2 += y * stride2;
for (int x = 0; x < nWidth; x++)
{
if (p[0] != p2[0] || p[1] != p2[1] || p[2] != p2[2] || p[3] != p2[3]) //found differences-began to store positions.
{
found = true;
if (x < minX)
minX = x;
if (x > maxX)
maxX = x;
if (y < minY)
minY = y;
}
else
{
if (found)
{
int height = getBlockHeight(stride, scan0, maxX, minY, scan02, stride2);
found = false;
Rectangle temp = new Rectangle(minX, minY, maxX - minX, height);
rec.Add(temp);
//x += minX;
y += height;
minX = int.MaxValue;
minY = int.MaxValue;
maxX = 0;
}
}
p += 4;
p2 += 4;
}
}
return rec;
}
public unsafe int getBlockHeight(int stride, IntPtr scan, int x, int y1, IntPtr scan02, int stride2) //a function to get an existing block height.
{
int height = 0;
for (int y = y1; y < 1080; y++) //only for example- in our case its 1080 height.
{
byte* p = (byte*)scan.ToPointer();
p += (y * stride) + (x * 4); //set the pointer to a specific potential point.
byte* p2 = (byte*)scan02.ToPointer();
p2 += (y * stride2) + (x * 4); //set the pointer to a specific potential point.
if (p[0] != p2[0] || p[1] != p2[1] || p[2] != p2[2] || p[3] != p2[3]) // still a difference at this y, so the block extends further down
height++;
}
return height;
}
This is actually how I call the method:
Bitmap a = Image.FromFile(@"C:\Users\itapi\Desktop\1.png") as Bitmap; // generates a 32bppArgb bitmap
Bitmap b = Image.FromFile(@"C:\Users\itapi\Desktop\2.png") as Bitmap;
List<Rectangle> l1 = CodeImage(a, b);
int i = 0;
foreach (Rectangle rec in l1)
{
i++;
Bitmap tmp = b.Clone(rec, a.PixelFormat);
tmp.Save(i.ToString() + ".png");
}
But I'm not getting the exact rectangle.. I'm getting only half of that and sometimes even worse. I think something in the code's logic is wrong.
Code for @nico
private unsafe List<Rectangle> CodeImage(Bitmap bmp, Bitmap bmp2)
{
List<Rectangle> rec = new List<Rectangle>();
var bmData1 = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp.PixelFormat);
var bmData2 = bmp2.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp2.PixelFormat);
int bytesPerPixel = 3;
IntPtr scan01 = bmData1.Scan0;
IntPtr scan02 = bmData2.Scan0;
int stride1 = bmData1.Stride;
int stride2 = bmData2.Stride;
int nWidth = bmp.Width;
int nHeight = bmp.Height;
bool[] visited = new bool[nWidth * nHeight];
byte* base1 = (byte*)scan01.ToPointer();
byte* base2 = (byte*)scan02.ToPointer();
for (int y = 0; y < nHeight; y += 5)
{
byte* p1 = base1;
byte* p2 = base2;
for (int x = 0; x < nWidth; x += 5)
{
if (!ArePixelsEqual(p1, p2, bytesPerPixel) && !(visited[x + nWidth * y]))
{
// fill the different area
int minX = x;
int maxX = x;
int minY = y;
int maxY = y;
var pt = new Point(x, y);
Stack<Point> toBeProcessed = new Stack<Point> ();
visited[x + nWidth * y] = true;
toBeProcessed.Push(pt);
while (toBeProcessed.Count > 0)
{
var process = toBeProcessed.Pop();
var ptr1 = (byte*)scan01.ToPointer() + process.Y * stride1 + process.X * bytesPerPixel;
var ptr2 = (byte*) scan02.ToPointer() + process.Y * stride2 + process.X * bytesPerPixel;
//Check pixel equality
if (ArePixelsEqual(ptr1, ptr2, bytesPerPixel))
continue;
//This pixel is different
//Update the rectangle
if (process.X < minX) minX = process.X;
if (process.X > maxX) maxX = process.X;
if (process.Y < minY) minY = process.Y;
if (process.Y > maxY) maxY = process.Y;
Point n;
int idx;
//Put neighbors in stack
if (process.X - 1 >= 0)
{
n = new Point(process.X - 1, process.Y);
idx = n.X + nWidth * n.Y;
if (!visited[idx])
{
visited[idx] = true;
toBeProcessed.Push(n);
}
}
if (process.X + 1 < nWidth)
{
n = new Point(process.X + 1, process.Y);
idx = n.X + nWidth * n.Y;
if (!visited[idx])
{
visited[idx] = true;
toBeProcessed.Push(n);
}
}
if (process.Y - 1 >= 0)
{
n = new Point(process.X, process.Y - 1);
idx = n.X + nWidth * n.Y;
if (!visited[idx])
{
visited[idx] = true;
toBeProcessed.Push(n);
}
}
if (process.Y + 1 < nHeight)
{
n = new Point(process.X, process.Y + 1);
idx = n.X + nWidth * n.Y;
if (!visited[idx])
{
visited[idx] = true;
toBeProcessed.Push(n);
}
}
}
if (((maxX - minX + 1) > 5) & ((maxY - minY + 1) > 5))
rec.Add(new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1));
}
p1 += 5 * bytesPerPixel;
p2 += 5 * bytesPerPixel;
}
base1 += 5 * stride1;
base2 += 5 * stride2;
}
bmp.UnlockBits(bmData1);
bmp2.UnlockBits(bmData2);
return rec;
}
I see a couple of problems with your code. If I understand it correctly, you
find a pixel that's different between the two images.
then you continue to scan from there to the right, until you find a position where both images are identical again.
then you scan from the last "different" pixel to the bottom, until you find a position where both images are identical again.
then you store that rectangle and start at the next line below it
Am I right so far?
Two obvious things can go wrong here:
If two rectangles have overlapping y-ranges, you're in trouble: You'll find the first rectangle fine, then skip to the bottom Y-coordinate, ignoring all the pixels left or right of the rectangle you just found.
Even if there is only one rectangle, you assume that every pixel on the rectangle's border is different, and all the other pixels are identical. If that assumption isn't valid, you'll stop searching too early, and only find parts of rectangles.
If your images come from a scanner or digital camera, or if they contain lossy compression (jpeg) artifacts, the second assumption will almost certainly be wrong. To illustrate this, here's what I get when I mark every identical pixel of the two jpg images you linked black, and every different pixel white:
What you see is not a rectangle. Instead, a lot of pixels around the rectangles you're looking for are different:
That's because of jpeg compression artifacts. But even if you used lossless source images, pixels at the borders might not form perfect rectangles, because of antialiasing or because the background just happens to have a similar color in that region.
You could try to improve your algorithm, but if you look at that border, you will find all kinds of ugly counterexamples to any geometric assumptions you'll make.
It would probably be better to implement this "the right way". Meaning:
Either implement a flood fill algorithm that erases different pixels (e.g. by setting them to identical or by storing a flag in a separate mask), then recursively checks the 4 neighboring pixels.
Or implement a connected component labeling algorithm that marks each different pixel with a temporary integer label, using clever data structures to keep track of which temporary labels are connected. If you're only interested in a bounding box, you don't even have to merge the temporary labels; just merge the bounding boxes of adjacent labeled areas.
Connected component labeling is in general a bit faster, but is a bit trickier to get right than flood fill.
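To make the connected component labeling option above a bit more concrete, here is a rough, untested sketch of the bookkeeping it describes: a tiny union-find over temporary labels that only merges bounding boxes (all names are mine; assigning labels while scanning the difference mask is left out).
// Requires System.Drawing, System.Linq and System.Collections.Generic.
class BoxMerger
{
    private readonly int[] parent;
    private readonly Rectangle[] box;       // Rectangle.Empty until a pixel is added

    public BoxMerger(int labelCount)
    {
        parent = Enumerable.Range(0, labelCount).ToArray();
        box = new Rectangle[labelCount];
    }

    public int Find(int label) =>
        parent[label] == label ? label : parent[label] = Find(parent[label]);

    public void AddPixel(int label, int x, int y)
    {
        int r = Find(label);
        var px = new Rectangle(x, y, 1, 1);
        box[r] = box[r].IsEmpty ? px : Rectangle.Union(box[r], px);
    }

    // Call this when two neighboring "different" pixels carry different temporary labels.
    public void Union(int a, int b)
    {
        a = Find(a); b = Find(b);
        if (a == b) return;
        box[a] = box[a].IsEmpty ? box[b]
               : box[b].IsEmpty ? box[a]
               : Rectangle.Union(box[a], box[b]);
        parent[b] = a;
    }

    public IEnumerable<Rectangle> Boxes() =>
        Enumerable.Range(0, parent.Length)
                  .Where(i => parent[i] == i && !box[i].IsEmpty)
                  .Select(i => box[i]);
}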
One last piece of advice: I would rethink your "no 3rd party libraries" policy if I were you. Even if your final product will contain no 3rd party libraries, development might be a lot faster if you used well-documented, well-tested, useful building blocks from a library, then replaced them one by one with your own code. (And who knows, you might even find an open source library with a suitable license that's so much faster than your own code that you'll stick with it in the end...)
ADD: In case you want to rethink your "no libraries" position, here's a quick and simple implementation using AForge (which has a more permissive license than emgucv):
private static void ProcessImages()
{
// load images
var img1 = AForge.Imaging.Image.FromFile(@"compare1.jpg");
var img2 = AForge.Imaging.Image.FromFile(@"compare2.jpg");
// calculate absolute difference
var difference = new AForge.Imaging.Filters.ThresholdedDifference(15)
{OverlayImage = img1}
.Apply(img2);
// create and initialize the blob counter
var bc = new AForge.Imaging.BlobCounter();
bc.FilterBlobs = true;
bc.MinWidth = 5;
bc.MinHeight = 5;
// find blobs
bc.ProcessImage(difference);
// draw result
BitmapData data = img2.LockBits(
new Rectangle(0, 0, img2.Width, img2.Height),
ImageLockMode.ReadWrite, img2.PixelFormat);
foreach (var rc in bc.GetObjectsRectangles())
AForge.Imaging.Drawing.FillRectangle(data, rc, Color.FromArgb(128,Color.Red));
img2.UnlockBits(data);
img2.Save(@"compareResult.jpg");
}
The actual difference + blob detection part (without loading and result display) takes about 43 ms for the second run (the first run takes longer of course, due to JITting, cache, etc.).
Result (the rectangle is larger due to jpeg artifacts):
Here is a flood-fill based version of your code. It checks every pixel for difference. If it finds a different pixel, it runs an exploration to find the entire different area.
The code is only meant as an illustration. There are certainly some points that could be improved.
unsafe bool ArePixelsEqual(byte* p1, byte* p2, int bytesPerPixel)
{
for (int i = 0; i < bytesPerPixel; ++i)
if (p1[i] != p2[i])
return false;
return true;
}
private static unsafe List<Rectangle> CodeImage(Bitmap bmp, Bitmap bmp2)
{
if (bmp.PixelFormat != bmp2.PixelFormat || bmp.Width != bmp2.Width || bmp.Height != bmp2.Height)
throw new ArgumentException();
List<Rectangle> rec = new List<Rectangle>();
var bmData1 = bmp.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp.PixelFormat);
var bmData2 = bmp2.LockBits(new System.Drawing.Rectangle(0, 0, bmp.Width, bmp.Height), System.Drawing.Imaging.ImageLockMode.ReadOnly, bmp2.PixelFormat);
int bytesPerPixel = Image.GetPixelFormatSize(bmp.PixelFormat) / 8;
IntPtr scan01 = bmData1.Scan0;
IntPtr scan02 = bmData2.Scan0;
int stride1 = bmData1.Stride;
int stride2 = bmData2.Stride;
int nWidth = bmp.Width;
int nHeight = bmp.Height;
bool[] visited = new bool[nWidth * nHeight];
byte* base1 = (byte*)scan01.ToPointer();
byte* base2 = (byte*)scan02.ToPointer();
for (int y = 0; y < nHeight; y++)
{
byte* p1 = base1;
byte* p2 = base2;
for (int x = 0; x < nWidth; ++x)
{
if (!ArePixelsEqual(p1, p2, bytesPerPixel) && !(visited[x + nWidth * y]))
{
// fill the different area
int minX = x;
int maxX = x;
int minY = y;
int maxY = y;
var pt = new Point(x, y);
Stack<Point> toBeProcessed = new Stack<Point>();
visited[x + nWidth * y] = true;
toBeProcessed.Push(pt);
while (toBeProcessed.Count > 0)
{
var process = toBeProcessed.Pop();
var ptr1 = (byte*)scan01.ToPointer() + process.Y * stride1 + process.X * bytesPerPixel;
var ptr2 = (byte*)scan02.ToPointer() + process.Y * stride2 + process.X * bytesPerPixel;
//Check pixel equality
if (ArePixelsEqual(ptr1, ptr2, bytesPerPixel))
continue;
//This pixel is different
//Update the rectangle
if (process.X < minX) minX = process.X;
if (process.X > maxX) maxX = process.X;
if (process.Y < minY) minY = process.Y;
if (process.Y > maxY) maxY = process.Y;
Point n; int idx;
//Put neighbors in stack
if (process.X - 1 >= 0)
{
n = new Point(process.X - 1, process.Y); idx = n.X + nWidth * n.Y;
if (!visited[idx]) { visited[idx] = true; toBeProcessed.Push(n); }
}
if (process.X + 1 < nWidth)
{
n = new Point(process.X + 1, process.Y); idx = n.X + nWidth * n.Y;
if (!visited[idx]) { visited[idx] = true; toBeProcessed.Push(n); }
}
if (process.Y - 1 >= 0)
{
n = new Point(process.X, process.Y - 1); idx = n.X + nWidth * n.Y;
if (!visited[idx]) { visited[idx] = true; toBeProcessed.Push(n); }
}
if (process.Y + 1 < nHeight)
{
n = new Point(process.X, process.Y + 1); idx = n.X + nWidth * n.Y;
if (!visited[idx]) { visited[idx] = true; toBeProcessed.Push(n); }
}
}
rec.Add(new Rectangle(minX, minY, maxX - minX + 1, maxY - minY + 1));
}
p1 += bytesPerPixel;
p2 += bytesPerPixel;
}
base1 += stride1;
base2 += stride2;
}
bmp.UnlockBits(bmData1);
bmp2.UnlockBits(bmData2);
return rec;
}
You can achieve this easily using a flood fill segmentation algorithm.
First, a utility class to make fast bitmap access easier. This will help to encapsulate the complex pointer logic and make the code more readable:
class BitmapWithAccess
{
public Bitmap Bitmap { get; private set; }
public System.Drawing.Imaging.BitmapData BitmapData { get; private set; }
public BitmapWithAccess(Bitmap bitmap, System.Drawing.Imaging.ImageLockMode lockMode)
{
Bitmap = bitmap;
BitmapData = bitmap.LockBits(new Rectangle(Point.Empty, bitmap.Size), lockMode, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
}
public Color GetPixel(int x, int y)
{
unsafe
{
byte* dataPointer = MovePointer((byte*)BitmapData.Scan0, x, y);
return Color.FromArgb(dataPointer[3], dataPointer[2], dataPointer[1], dataPointer[0]);
}
}
public void SetPixel(int x, int y, Color color)
{
unsafe
{
byte* dataPointer = MovePointer((byte*)BitmapData.Scan0, x, y);
dataPointer[3] = color.A;
dataPointer[2] = color.R;
dataPointer[1] = color.G;
dataPointer[0] = color.B;
}
}
public void Release()
{
Bitmap.UnlockBits(BitmapData);
BitmapData = null;
}
private unsafe byte* MovePointer(byte* pointer, int x, int y)
{
return pointer + x * 4 + y * BitmapData.Stride;
}
}
Then a class representing a rectangle containing different pixels, to mark them in the resulting image. In general this class can also contain a list of Point instances (or a byte[,] map) to make indicating individual pixels in the resulting image possible:
class Segment
{
public int Left { get; set; }
public int Top { get; set; }
public int Right { get; set; }
public int Bottom { get; set; }
public Bitmap Bitmap { get; set; }
public Segment()
{
Left = int.MaxValue;
Right = int.MinValue;
Top = int.MaxValue;
Bottom = int.MinValue;
}
};
Then the steps of a simple algorithm are as follows:
find different pixels
use a flood-fill algorithm to find segments on the difference image
draw bounding rectangles for the segments found
The first step is the easiest one:
static Bitmap FindDifferentPixels(Bitmap i1, Bitmap i2)
{
var result = new Bitmap(i1.Width, i1.Height, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
var ia1 = new BitmapWithAccess(i1, System.Drawing.Imaging.ImageLockMode.ReadOnly);
var ia2 = new BitmapWithAccess(i2, System.Drawing.Imaging.ImageLockMode.ReadOnly);
var ra = new BitmapWithAccess(result, System.Drawing.Imaging.ImageLockMode.ReadWrite);
for (int x = 0; x < i1.Width; ++x)
for (int y = 0; y < i1.Height; ++y)
{
var different = ia1.GetPixel(x, y) != ia2.GetPixel(x, y);
ra.SetPixel(x, y, different ? Color.White : Color.FromArgb(0, 0, 0, 0));
}
ia1.Release();
ia2.Release();
ra.Release();
return result;
}
And the second and the third steps are covered with the following three functions:
static List<Segment> Segmentize(Bitmap blackAndWhite)
{
var bawa = new BitmapWithAccess(blackAndWhite, System.Drawing.Imaging.ImageLockMode.ReadOnly);
var result = new List<Segment>();
HashSet<Point> queue = new HashSet<Point>();
bool[,] visitedPoints = new bool[blackAndWhite.Width, blackAndWhite.Height];
for (int x = 0;x < blackAndWhite.Width;++x)
for (int y = 0;y < blackAndWhite.Height;++y)
{
if (bawa.GetPixel(x, y).A != 0
&& !visitedPoints[x, y])
{
result.Add(BuildSegment(new Point(x, y), bawa, visitedPoints));
}
}
bawa.Release();
return result;
}
static Segment BuildSegment(Point startingPoint, BitmapWithAccess bawa, bool[,] visitedPoints)
{
var result = new Segment();
List<Point> toProcess = new List<Point>();
toProcess.Add(startingPoint);
while (toProcess.Count > 0)
{
Point p = toProcess.First();
toProcess.RemoveAt(0);
ProcessPoint(result, p, bawa, toProcess, visitedPoints);
}
return result;
}
static void ProcessPoint(Segment segment, Point point, BitmapWithAccess bawa, List<Point> toProcess, bool[,] visitedPoints)
{
for (int i = -1; i <= 1; ++i)
{
for (int j = -1; j <= 1; ++j)
{
int x = point.X + i;
int y = point.Y + j;
if (x < 0 || y < 0 || x >= bawa.Bitmap.Width || y >= bawa.Bitmap.Height)
continue;
if (bawa.GetPixel(x, y).A != 0 && !visitedPoints[x, y])
{
segment.Left = Math.Min(segment.Left, x);
segment.Right = Math.Max(segment.Right, x);
segment.Top = Math.Min(segment.Top, y);
segment.Bottom = Math.Max(segment.Bottom, y);
toProcess.Add(new Point(x, y));
visitedPoints[x, y] = true;
}
}
}
}
And the following program given your two images as arguments:
static void Main(string[] args)
{
Image ai1 = Image.FromFile(args[0]);
Image ai2 = Image.FromFile(args[1]);
Bitmap i1 = new Bitmap(ai1.Width, ai1.Height, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
Bitmap i2 = new Bitmap(ai2.Width, ai2.Height, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
using (var g1 = Graphics.FromImage(i1))
using (var g2 = Graphics.FromImage(i2))
{
g1.DrawImage(ai1, Point.Empty);
g2.DrawImage(ai2, Point.Empty);
}
var difference = FindDifferentPixels(i1, i2);
var segments = Segmentize(difference);
using (var g1 = Graphics.FromImage(i1))
{
foreach (var segment in segments)
{
g1.DrawRectangle(Pens.Red, new Rectangle(segment.Left, segment.Top, segment.Right - segment.Left, segment.Bottom - segment.Top));
}
}
i1.Save("result.png");
Console.WriteLine("Done.");
Console.ReadKey();
}
produces the following result:
As you can see there are more differences between the given images. You can filter the resulting segments with regard to their size for example to drop the small artefacts. Also there is of course much work to do in terms of error checking, design and performance.
One idea is to proceed as follows:
1) Rescale images to a smaller size (downsample)
2) Run the above algorithm on smaller images
3) Run the above algorithm on original images, but restricting yourself only to rectangles found in step 2)
This can be of course extended to a multi-level hierarchical approach (using more different image sizes, increasing accuracy with each step).
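As a rough illustration of steps 1) and 2), and assuming the CodeImage method from the flood-fill answer above, the coarse pass could look something like this (FindCoarseRectangles and factor are made-up names):
// Sketch only: shrink both frames, diff the small copies, then scale the rectangles
// back up so the full-resolution pass in step 3) can be restricted to those areas.
static List<Rectangle> FindCoarseRectangles(Bitmap a, Bitmap b, int factor)
{
    using (var smallA = new Bitmap(a, a.Width / factor, a.Height / factor))
    using (var smallB = new Bitmap(b, b.Width / factor, b.Height / factor))
    {
        return CodeImage(smallA, smallB)   // the flood-fill version above
            .Select(r => new Rectangle(r.X * factor, r.Y * factor,
                                       r.Width * factor, r.Height * factor))
            .ToList();
    }
}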
Ah an algorithm challenge. Like! :-)
There are other answers here using f.ex. floodfill that will work just fine. I just noticed that you wanted something fast, so let me propose a different idea. Unlike the other people, I haven't tested it; it shouldn't be too hard and should be quite fast, but I simply don't have the time at the moment to test it myself. If you do, please share the results. Also, note that it's not a standard algorithm, so there are probably some bugs here and there in my explanation (and no patents).
My idea is derived from the idea of mean adaptive thresholding, but with a lot of important differences. I cannot find the Wikipedia link or my code anymore, so I'll do this off the top of my head. Basically you create a new (64-bit) buffer for both images and fill it with:
f(x,y) = colorvalue + f(x-1, y) + f(x, y-1) - f(x-1, y-1)
f(x,0) = colorvalue + f(x-1, 0)
f(0,y) = colorvalue + f(0, y-1)
The main trick is that you can calculate the sum value of a portion of the image fast, namely by:
g(x1,y1,x2,y2) = f(x2,y2)-f(x1-1,y2)-f(x2,y1-1)+f(x1-1,y1-1)
In other words, this will give the same result as:
result = 0;
for (x=x1; x<=x2; ++x)
for (y=y1; y<=y2; ++y)
result += f(x,y)
In our case this means that with only 4 integer operations this will get you some unique number for the block in question. I'd say that's pretty awesome.
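If it helps, here is a small C# sketch of that buffer and the 4-lookup query; BuildIntegral and G are my own names, colorvalue is the packed 24-bit RGB, and GetPixel is used only to keep the sketch short:
static long[,] BuildIntegral(Bitmap bmp)
{
    // f(x,y) = colorvalue + f(x-1,y) + f(x,y-1) - f(x-1,y-1), per the recurrence above
    var f = new long[bmp.Width, bmp.Height];
    for (int y = 0; y < bmp.Height; y++)
        for (int x = 0; x < bmp.Width; x++)
        {
            long c = bmp.GetPixel(x, y).ToArgb() & 0xFFFFFF;   // 24-bit RGB; use LockBits in real code
            f[x, y] = c
                    + (x > 0 ? f[x - 1, y] : 0)
                    + (y > 0 ? f[x, y - 1] : 0)
                    - (x > 0 && y > 0 ? f[x - 1, y - 1] : 0);
        }
    return f;
}
// g(x1,y1,x2,y2): sum over the block [x1..x2] x [y1..y2] in four lookups
static long G(long[,] f, int x1, int y1, int x2, int y2)
{
    long a = (x1 > 0 && y1 > 0) ? f[x1 - 1, y1 - 1] : 0;
    long b = (y1 > 0) ? f[x2, y1 - 1] : 0;
    long c = (x1 > 0) ? f[x1 - 1, y2] : 0;
    return f[x2, y2] - b - c + a;
}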
Now, in our case, we don't really care about the average value; we just care about some sort-of unique number. If the image changes, it should change - simple as that. As for colorvalue, usually some gray scale number is used for thresholding - instead, we'll be using the complete 24-bit RGB value. Because there are only so few compares, we can simply scan until we find a block that doesn't match.
The basic algorithm that I propose works as follows:
for (y=0; y<height;++y)
for (x=0; x<width; ++x)
if (src[x,y] != dst[x,y])
if (!IntersectsWith(x, y, foundBlocks))
FindBlock(foundBlocks);
Now, IntersectsWith can be something like a quad tree, or if there are only a few blocks, you can simply iterate through the blocks and check if they are within the bounds of the block. You can also update the x variable accordingly (I would). You can even balance things by re-building the buffer for f(x,y) if you have too many blocks (more precisely: merge found blocks back from dst into src, then rebuild the buffer).
FindBlocks is where it gets interesting. Using the formula for g that's now pretty easy:
int x1 = x-1; int y1 = y-1; int x2 = x; int y2 = y;
while (changes)
{
    // grow the block while the strip just outside it still differs between the two images
    while (g(srcimage,x1-1,y1,x1,y2) != g(dstimage,x1-1,y1,x1,y2)) { --x1; }  // left
    while (g(srcimage,x1,y1-1,x2,y1) != g(dstimage,x1,y1-1,x2,y1)) { --y1; }  // top
    while (g(srcimage,x2,y1,x2+1,y2) != g(dstimage,x2,y1,x2+1,y2)) { ++x2; }  // right
    while (g(srcimage,x1,y2,x2,y2+1) != g(dstimage,x1,y2,x2,y2+1)) { ++y2; }  // bottom
}
That's it. Note that the complexity of the FindBlocks algorithm is O(x + y), which is pretty awesome for finding a 2D block IMO. :-)
As I said, let me know how it turns out.
Like many programs that take a tiled map, such as the one in the game Terraria, and turn it into a single picture of the entire map, I am trying to do something similar. The problem is, my block textures are in a single large texture atlas and are referenced by index, and I am having trouble taking the color data from a single block and placing it into the correct place in the larger texture.
This is my code so far.
Getting the source from the index (this code works):
public static Rectangle GetSourceForIndex(int index, Texture2D tex)
{
int dim = tex.Width / TEXTURE_MAP_DIM;
int startx = index % TEXTURE_MAP_DIM;
int starty = index / TEXTURE_MAP_DIM;
return new Rectangle(startx * dim, starty * dim, dim, dim);
}
Getting the texture at the index (Where the problems start):
public static Texture2D GetTextureAtIndex(int index, Texture2D tex)
{
Rectangle source = GetSourceForIndex(index, tex);
Texture2D texture = new Texture2D(_device, source.Width, source.Height);
Color[] colors = new Color[tex.Width * tex.Height];
tex.GetData<Color>(colors);
Color[] colorData = new Color[source.Width * source.Height];
for (int x = 0; x < source.Width; x++)
{
for (int y = 0; y < source.Height; y++)
{
colorData[x + y * source.Width] = colors[x + source.X + (y + source.Y) * tex.Width];
}
}
texture.SetData<Color>(colorData);
return texture;
}
Putting the texture into the larger picture (this is completely wrong I'm sure):
private void doSave()
{
int texWidth = this._rWidth * Region.REGION_DIM * 16;
int texHeight = this._rHeight * Region.REGION_DIM * 16;
Texture2D picture = new Texture2D(Game.GraphicsDevice, texWidth, texHeight);
Color[] pictureData = new Color[picture.Width * picture.Height];
for (int blockX = 0; blockX < texWidth / 16; blockX++)
{
for (int blockY = 0; blockY < texHeight / 16; blockY++)
{
Block b = this.GetBlockAt(blockX, blockY);
Texture2D toCopy = TextureManager.GetTextureAtIndex(b.GetIndexBasedOnMetadata(b.GetMetadataForSurroundings(this, blockX, blockY)), b.GetTextureFile());
Color[] copyData = new Color[toCopy.Width * toCopy.Height];
Rectangle source = new Rectangle(blockX * 16, blockY * 16, 16, 16);
toCopy.GetData<Color>(copyData);
for (int x = 0; x < source.Width; x++)
{
for (int y = 0; y < source.Height; y++)
{
pictureData[x + source.X + (y + source.Y) * picture.Width] = copyData[x + y * source.Width];
}
}
}
}
picture.SetData<Color>(pictureData);
string fileName = "picture" + DateTime.Now.ToString(@"MM\-dd\-yyyy-h\-mm-tt");
FileStream stream = File.Open(this.GetSavePath() + @"Pictures\" + fileName, FileMode.OpenOrCreate);
picture.SaveAsPng(stream, picture.Width, picture.Height);
I can't find any good descriptions on how to properly convert between the texture and a one dimensional color array. It would be much easier if I knew how to easily and properly place a square of colors into a larger two dimensional texture.
TL;DR: How do you put a smaller Texture into a larger texture?
Create a RenderTarget2D the size of your largest texture and set it as the active render target. Draw the large texture, then draw the smaller one. Set the reference to the original texture to the RenderTarget2D you just drew to.
int texWidth = this._rWidth * Region.REGION_DIM * 16;
int texHeight = this._rHeight * Region.REGION_DIM * 16;
_renderTarget = new RenderTarget2D(GraphicsDevice, texWidth, texHeight);
GraphicsDevice.SetRenderTarget(_renderTarget);
GraphicsDevice.Clear(Color.Transparent);
_spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.Opaque);
for (int blockX = 0; blockX < texWidth / 16; blockX++)
{
for (int blockY = 0; blockY < texHeight / 16; blockY++)
{
    Block b = this.GetBlockAt(blockX, blockY); // as in the question's doSave()
    _spriteBatch.Draw(
        TextureManager.GetTextureAtIndex(b.GetIndexBasedOnMetadata(b.GetMetadataForSurroundings(this, blockX, blockY)), b.GetTextureFile()),
new Rectangle(blockX * 16, blockY * 16, 16, 16),
Color.White);
}
}
_spriteBatch.End();
GraphicsDevice.SetRenderTarget(null);
var picture = _renderTarget;
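If you would rather stay on the CPU side, as in the question's doSave, note that Texture2D.SetData also has an overload that takes a destination rectangle, so each block's pixels can be written straight into the big texture without hand-computing offsets. A rough sketch for one block (index and atlas stand for the arguments used in the question):
// Copy one 16x16 block's pixels into the composite texture via the rectangle overload.
Texture2D toCopy = TextureManager.GetTextureAtIndex(index, atlas);
Color[] copyData = new Color[toCopy.Width * toCopy.Height];
toCopy.GetData(copyData);
picture.SetData(0,                                      // mip level
    new Rectangle(blockX * 16, blockY * 16, 16, 16),    // destination region in the big texture
    copyData, 0, copyData.Length);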