Adjusting the physical dimensions of a Bitmap object - c#

I have a script that returns a heatmap based on a List of Color objects (they're RGB values derived from a Gradient component in a graphical "coding" software called Grasshopper), which looks like this:
Below is an excerpt of my C# heatmap-drawing method that returns a Bitmap.
private Bitmap DrawHeatmap(List<Color> colors, int U, int V){
    colorHeatmapArray = new Color[colors.Count()];
    for(int i = 0; i < colors.Count(); i++){
        colorHeatmapArray[i] = colors[i];
    }

    // Create heatmap image.
    Bitmap map = new Bitmap(U, V, System.Drawing.Imaging.PixelFormat.Format32bppArgb);

    int x = 0;
    int y = 0;
    for(int i = 0; i < colors.Count(); i++){
        Color color = colorHeatmapArray[i];
        map.SetPixel(x, y, color);
        y++;
        if (y >= map.Height){
            y = 0;
            x++;
        }
        if (x >= map.Width){
            break;
        }
    }
    return map;
}
The method I used to save the image is like this:
private void saveBMP(){
    _heatmap.Save(Path); // Path is just a string declared somewhere
}
_heatmap is an instance variable, declared as private Bitmap _heatmap;, in which I store the Bitmap returned by the DrawHeatmap() method.
The way I display the image on the "canvas" of Grasshopper relies on some Grasshopper-specific methods, specifically this snippet:
RectangleF rec = Component.Attributes.Bounds;
rec.X = rec.Right + 10;
rec.Height = Height;
rec.Width = Width;
canvas.Graphics.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.NearestNeighbor;
canvas.Graphics.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.Half;
canvas.Graphics.DrawImage(_heatmap, GH_Convert.ToRectangle(rec));
canvas.Graphics.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.HighQualityBicubic;
canvas.Graphics.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.Default;
canvas.Graphics.DrawRectangle(Pens.Black, GH_Convert.ToRectangle(rec));
However, when I save the Bitmap object, the result I get is a slightly taller version of what I have on the canvas, which looks like this:
Doesn't look very pretty does it?
My question is - on calling the saveBMP() method, is there a way to manipulate the Bitmap to adjust the dimensions so it looks remotely like what I have on the canvas?

Assuming _heatmap is set from the output of the DrawHeatmap method, its size should be set at the point of initialisation in that method to U by V pixels. Once it's saved, check the saved file's dimensions: are they what you expect given the values of U and V passed into DrawHeatmap?
When you are drawing to the rectangle in the latter code section, are you using the same Height and Width values as earlier?
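If the mismatch is only about pixel dimensions, one option is to redraw the U-by-V heatmap into a bitmap of the on-canvas size before saving. This is only a sketch, not from the thread; canvasWidth/canvasHeight stand in for whatever Width/Height you use for the display rectangle:
// Sketch only: before saving, redraw the U-by-V heatmap into a bitmap that has
// the same pixel size as the on-canvas rectangle, using the same
// nearest-neighbour settings as the canvas drawing code.
using (Bitmap scaled = new Bitmap(canvasWidth, canvasHeight))
{
    using (Graphics g = Graphics.FromImage(scaled))
    {
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.NearestNeighbor;
        g.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.Half;
        g.DrawImage(_heatmap, new Rectangle(0, 0, canvasWidth, canvasHeight));
    }
    scaled.Save(Path); // Path is the same string field as in saveBMP()
}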

After some Googling, it looks like I found a solution from this link.
Specifically:

Related

Is there a way of extracting the index of a pixel in an indexed colour Bitmap (C#)?

I've loaded an indexed colour image (8bppI) with a unique palette into a C# program and I need to access the index of colours in that image. However, the only access function seems to be Bitmap.GetPixel(x,y) which returns a colour, not an index. When that same colour is inserted back into a Bitmap of the same format and palette, the colour information is apparently misinterpreted as an index and everything goes to heck. Here's a simplified version of the code for clarity of the issue:
public void CreateTerrainMap() {
    visualization = new Bitmap(width, height, PixelFormat.Format8bppIndexed);
    visualizationLock = new LockBitmap(visualization);
    Lock();

    // "TerrainIndex.bmp" is a 256x256 indexed colour image (8bpp) with a unique palette.
    // Debugging confirms that this image loads with its palette and format intact
    Bitmap terrainColours = new Bitmap("Resources\\TerrainIndex.bmp");
    visualization.Palette = terrainColours.Palette;

    Color c;
    for (int x = 0; x < width; x++) {
        for (int y = 0; y < height; y++) {
            if (Terrain[x, y] < SeaLevel) {
                c = Color.FromArgb(15); // Counterintuitively, this actually gives index 15 (represented as (0,0,0,15))
            } else {
                heatIndex = <some number between 0 and 255>;
                rainIndex = <some number between 0 and 255>;
                if (IsCoastal(x, y)) {
                    c = Color.FromArgb(35); // Counterintuitively, this actually gives index 35 (represented as (0,0,0,35))
                } else {
                    // This returns an argb colour rather than an index...
                    c = terrainColours.GetPixel(rainIndex, heatIndex);
                }
            }
            // ...and this seemingly decides that the blue value is actually an index and sets the wrong colour entirely
            visualizationLock.SetPixel(x, y, c);
        }
    }
}
TerrainIndex looks like this:
TerrainIndex.bmp
The palette looks like this: Palette
The output should look like this: Good
But it looks like this instead: Bad
Note that the oceans (index 15) and coasts (index 35) look correct, but everything else is coming from the wrong part of the palette.
I can't find any useful information on working with indexed colour bitmaps in C#. I really hope someone can explain to me what I might be doing wrong, or point me in the right direction.
I created an answer from my comment. So the "native" solution is something like this (requires allowing unsafe code):
Bitmap visualization = new Bitmap(width, height, PixelFormat.Format8bppIndexed);
visualization.Palette = GetVisualizationPalette();

BitmapData visualizationData = visualization.LockBits(new Rectangle(Point.Empty, visualization.Size),
    ImageLockMode.WriteOnly, PixelFormat.Format8bppIndexed);
try
{
    unsafe
    {
        byte* row = (byte*)visualizationData.Scan0;
        for (int y = 0; y < visualizationData.Height; y++)
        {
            for (int x = 0; x < visualizationData.Width; x++)
            {
                // here you set the 8bpp palette index directly
                row[x] = GetHeatIndex(x, y);
            }

            row += visualizationData.Stride;
        }
    }
}
finally
{
    visualization.UnlockBits(visualizationData);
}
Or, you can use these libraries, and then:
using KGySoft.Drawing;
using KGySoft.Drawing.Imaging;

// ...

Bitmap visualization = new Bitmap(width, height, PixelFormat.Format8bppIndexed);
visualization.Palette = GetVisualizationPalette();

using (IWritableBitmapData visualizationData = visualization.GetWritableBitmapData())
{
    for (int y = 0; y < visualizationData.Height; y++)
    {
        IWritableBitmapDataRow row = visualizationData[y];
        for (int x = 0; x < visualizationData.Width; x++)
        {
            // setting pixel by palette index
            row.SetColorIndex(x, GetHeatIndex(x, y));

            // or: directly by raw data (8bpp means 1 byte per pixel)
            row.WriteRaw<byte>(x, GetHeatIndex(x, y));

            // or: by color (automatically applies the closest palette index)
            row.SetColor(x, GetHeatColor(x, y));
        }
    }
}
Edit:
And for reading pixels/indices you can use terrainColors.GetReadableBitmapData(), so you will be able to use rowTerrain.GetColorIndex(x) or rowTerrain.ReadRaw<byte>(x) in a very similar way.
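For example, a minimal sketch of that reading path (using the question's terrainColours bitmap and indices, assuming the same KGySoft API as above):
using (IReadableBitmapData terrainData = terrainColours.GetReadableBitmapData())
{
    IReadableBitmapDataRow rowTerrain = terrainData[heatIndex];

    // palette index of the pixel at (rainIndex, heatIndex)
    int paletteIndex = rowTerrain.GetColorIndex(rainIndex);

    // or: read the raw 8bpp value directly
    byte rawIndex = rowTerrain.ReadRaw<byte>(rainIndex);
}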

Why is AForge.net giving a different output in case of FFT Auto-correlation?

See this related question.
I want to obtain the same outcome using AForge.net framework. The output should match the following:
The output, however, is not coming out as expected:
Why is the output different in AForge.net?
Source Code
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();

        Bitmap image = (Bitmap)Bitmap.FromFile("StandardImage\\lena.png");
        Bitmap conv = new Bitmap(image.Width, image.Height, image.PixelFormat);

        ComplexImage cImage = ComplexImage.FromBitmap(image);
        cImage.ForwardFourierTransform();

        ComplexImage cKernel = ComplexImage.FromBitmap(image);
        cImage.ForwardFourierTransform();

        ComplexImage convOut = ComplexImage.FromBitmap(conv);
        convOut.ForwardFourierTransform();

        for (int y = 0; y < cImage.Height; y++)
        {
            for (int x = 0; x < cImage.Width; x++)
            {
                convOut.Data[x, y] = cImage.Data[x, y] * cKernel.Data[x, y];
            }
        }

        convOut.BackwardFourierTransform();

        Bitmap bbbb = convOut.ToBitmap();
        pictureBox1.Image = bbbb;
    }
}
The main problem is
ComplexImage cKernel = ComplexImage.FromBitmap(image);
//cImage.ForwardFourierTransform(); //<--- This line should be FFT of cKernel
cKernel.ForwardFourierTransform();
This would solve the problem you mentioned in the resulting image, but if you want to get an image similar to the bottom right image you need to do some normalization to increase the intensity of pixels.
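One rough way to do such a normalization (a sketch only; it assumes AForge's ComplexImage.Data exposes Complex values with public Re/Im fields and a Magnitude property) is to rescale everything by the largest magnitude before converting back to a bitmap:
// Sketch: scale convOut.Data so the largest magnitude becomes 1.0,
// which brightens the result before calling ToBitmap().
double max = 0;
for (int y = 0; y < convOut.Height; y++)
    for (int x = 0; x < convOut.Width; x++)
        max = Math.Max(max, convOut.Data[x, y].Magnitude);

if (max > 0)
{
    for (int y = 0; y < convOut.Height; y++)
        for (int x = 0; x < convOut.Width; x++)
        {
            convOut.Data[x, y].Re /= max;
            convOut.Data[x, y].Im /= max;
        }
}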
update:
The bottom right image is actually a Fourier image, so I think we should remove the backward FFT:
//convOut.BackwardFourierTransform();
It seems like you are not using a Gaussian kernel with AForge.
Anyway, the library has a filter for convolution with a Gaussian:
int w=4,h=11;
GaussianBlur filter = new GaussianBlur( w, h );
// apply the filter
filter.ApplyInPlace( image );
Try it and the output should be the same as the others.

Plotting 2D heat map

I have a chart on which I want to plot a heat map; the only data I have is humidity and temperature, which represent a point in the chart.
How do I get the rectangular type of heat map on the chart in c#?
What I want is similar to picture below :
What I really want is a rectangular region in the chart which is plotted in different colors based on the points that I get from the list of points, forming the colorful sections in the chart.
You have a choice of at least three ways to create a chart with colored rectangles that make up a heat map.
Here is one example
that uses/abuses a DataGridView. While I would not suggest this, the post contains a useful function that creates nice color lists to use in your task.
Then there is the option to draw the chart using GDI+ methods, namely Graphics.FillRectangle. This is not hard at all, but once you want those nice extras a Chart control offers, like scaling, axes, tooltips etc., the work adds up.. See below!
So let's have a look at option three: Using the Chart control from the DataVisualization namespace.
Let's first assume that you have created a list of colors:
List<Color> colorList = new List<Color>();
And that you have managed to project your data onto a 2D array of int indices that point into the color list:
int[,] coloredData = null;
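How you get from raw humidity/temperature values to those indices is up to you; a hypothetical helper using simple linear binning might look like this (names and binning scheme are assumptions, not part of the answer):
// Hypothetical helper: bin each raw value linearly into an index into colorList.
int[,] ProjectToIndices(float[,] raw, int colorCount)
{
    float min = float.MaxValue, max = float.MinValue;
    foreach (float v in raw)
    {
        min = Math.Min(min, v);
        max = Math.Max(max, v);
    }

    float range = Math.Max(max - min, float.Epsilon);
    int[,] result = new int[raw.GetLength(0), raw.GetLength(1)];
    for (int x = 0; x < raw.GetLength(0); x++)
        for (int y = 0; y < raw.GetLength(1); y++)
            result[x, y] = (int)((raw[x, y] - min) / range * (colorCount - 1));
    return result;
}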
Next you have to pick a ChartType for your Series S1. There really is only one I can think of that will help:
S1.ChartType = SeriesChartType.Point;
Points are displayed as Markers, but we don't really want the DataPoints displayed as one of the standard MarkerTypes.
Square would be ok if we wanted to display squares; but for rectangles it will not work well: even if we let them overlap, there will still be points at the borders that come out a different size because they don't fully overlap..
So we use a custom marker by setting the MarkerImage of each point to a bitmap of a suitable size and color.
Here is a loop that adds the DataPoints to our Series and sets each to have a MarkerImage:
for (int x = 1; x < coloredData.GetLength(0); x++)
for (int y = 1; y < coloredData.GetLength(1); y++)
{
int pt = S1.Points.AddXY(x, y);
S1.Points[pt].MarkerImage = "NI" + coloredData[x,y];
}
This takes some explaining: To set a MarkerImage that is not at a path on the disk, it has to reside in the Chart's Images collection. This means it needs to be of type NamedImage. Any image will do, but it has to have a unique name string added to identify it in the NamedImagesCollection. I chose the names to be 'NI1', 'NI2'..
Obviously we need to create all those images; here is a function to do that:
void createMarkers(Chart chart, int count)
{
    // rough calculation:
    int sw = chart.ClientSize.Width / coloredData.GetLength(0);
    int sh = chart.ClientSize.Height / coloredData.GetLength(1);

    // clean up previous images:
    foreach (NamedImage ni in chart.Images) ni.Dispose();
    chart.Images.Clear();

    // now create count images:
    for (int i = 0; i < count; i++)
    {
        Bitmap bmp = new Bitmap(sw, sh);
        using (Graphics G = Graphics.FromImage(bmp))
            G.Clear(colorList[i]);
        chart.Images.Add(new NamedImage("NI" + i, bmp));
    }
}
We want all markers to have at least roughly the right size; so whenever that size changes we set it again:
void setMarkerSize(Chart chart)
{
    int sx = chart.ClientSize.Width / coloredData.GetLength(0);
    int sy = chart.ClientSize.Height / coloredData.GetLength(1);
    chart.Series["S1"].MarkerSize = (int)Math.Max(sx, sy);
}
This doesn't care much about details like the InnerPlotPosition, i.e. the actual area to draw to; so here is some room for refinement..!
We call this when we set up the chart but also upon resizing:
private void chart1_Resize(object sender, EventArgs e)
{
    setMarkerSize(chart1);
    createMarkers(chart1, 100);
}
Let's have a look at the result using some cheap test data:
As you can see resizing works ok..
Here is the full code that set up my example:
private void button6_Click(object sender, EventArgs e)
{
    List<Color> stopColors = new List<Color>()
        { Color.Blue, Color.Cyan, Color.YellowGreen, Color.Orange, Color.Red };
    colorList = interpolateColors(stopColors, 100);
    coloredData = getCData(32, 24);

    // basic setup..
    chart1.ChartAreas.Clear();
    ChartArea CA = chart1.ChartAreas.Add("CA");
    chart1.Series.Clear();
    Series S1 = chart1.Series.Add("S1");
    chart1.Legends.Clear();

    // we choose a charttype that lets us add points freely:
    S1.ChartType = SeriesChartType.Point;

    Size sz = chart1.ClientSize;

    // we need to make the markers large enough to fill the area completely:
    setMarkerSize(chart1);
    createMarkers(chart1, 100);

    // now we fill in the datapoints
    for (int x = 1; x < coloredData.GetLength(0); x++)
        for (int y = 1; y < coloredData.GetLength(1); y++)
        {
            int pt = S1.Points.AddXY(x, y);
            // S1.Points[pt].Color = coloredData[x, y];
            S1.Points[pt].MarkerImage = "NI" + coloredData[x, y];
        }
}
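interpolateColors and getCData are not shown in this answer (the first comes from the DataGridView post linked above, the second just builds test data). A rough sketch of what such a gradient helper could look like, purely as an assumption:
// Sketch of a gradient helper (assumes at least two stop colors and count >= 2):
// build "count" colors by linearly blending between consecutive stop colors.
List<Color> interpolateColors(List<Color> stops, int count)
{
    List<Color> result = new List<Color>();
    for (int i = 0; i < count; i++)
    {
        float t = (float)i / (count - 1) * (stops.Count - 1);
        int i0 = Math.Min((int)t, stops.Count - 2);
        float f = t - i0;
        Color c0 = stops[i0], c1 = stops[i0 + 1];
        result.Add(Color.FromArgb(
            (int)(c0.R + (c1.R - c0.R) * f),
            (int)(c0.G + (c1.G - c0.G) * f),
            (int)(c0.B + (c1.B - c0.B) * f)));
    }
    return result;
}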
A few notes on limitations:
The points will always sit on top of any gridlines. If you really need those you will have to draw them on top in one of the Paint events.
The labels as shown refer to the integer indices of the data array. If you want to show the original data, one way would be to add CustomLabels to the axes.. See here for an example!
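A minimal sketch of that CustomLabels idea (minHumidity/maxHumidity are placeholder values standing in for the real data range behind the X indices):
// Sketch: replace the default integer axis labels with labels that show the
// original data range. minHumidity/maxHumidity are placeholder values here.
double minHumidity = 20, maxHumidity = 80;
int cells = coloredData.GetLength(0);
Axis ax = chart1.ChartAreas["CA"].AxisX;
ax.CustomLabels.Clear();
for (int x = 0; x < cells; x += 4)
{
    double value = minHumidity + (maxHumidity - minHumidity) * x / (cells - 1);
    ax.CustomLabels.Add(x - 0.5, x + 0.5, value.ToString("0"));
}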
This should give you an idea of what you can do with a Chart control; to complete your confusion here is how to draw those rectangles in GDI+ using the same colors and data:
Bitmap getChartImg(float[,] data, Size sz, Padding pad)
{
    Bitmap bmp = new Bitmap(sz.Width, sz.Height);
    using (Graphics G = Graphics.FromImage(bmp))
    {
        float w = 1f * (sz.Width - pad.Left - pad.Right) / coloredData.GetLength(0);
        float h = 1f * (sz.Height - pad.Top - pad.Bottom) / coloredData.GetLength(1);

        for (int x = 0; x < coloredData.GetLength(0); x++)
            for (int y = 0; y < coloredData.GetLength(1); y++)
            {
                using (SolidBrush brush = new SolidBrush(colorList[coloredData[x, y]]))
                    G.FillRectangle(brush, pad.Left + x * w, pad.Top + y * h, w, h);
            }
    }
    return bmp;
}
The resulting Bitmap looks familiar:
That was simple; but to add all the extras into the space reserved by the padding will not be so easy..

Dynamically creating a Texture2D

I wanted to create a semi-transparent overlay for my screen and decided to dynamically create a custom Texture2D object using the following code:
const int TEX_WIDTH = 640;
const int TEX_HEIGHT = 480;

Texture2D redScreen;

void GenerateTextures(GraphicsDevice device)
{
    redScreen = new Texture2D(device, TEX_WIDTH, TEX_HEIGHT);
    uint[] red = new uint[TEX_WIDTH * TEX_HEIGHT];
    for (int i = 0; i < TEX_WIDTH * TEX_HEIGHT; i++)
        red[i] = 0x1A0000ff;
    redScreen.SetData<uint>(red);
}
And it just doesn't seem to work as expected! Looking at this code, I would expect the alpha value to be about 10% (0x1A / 0xFF ≈ 10%), but it ends up being much more than that.
It seems to me that the uint represents an ARGB value, but the transparency value is never what I set it to be. It's either "somewhat transparent" or not transparent at all.
I don't like asking vague questions, but what am I doing wrong? What's wrong with this code snippet?
Edit:
In the end, I could only get the wanted results by setting BlendState.NonPremultiplied in the spriteBatch.Begin() call.
XNA by default uses pre-multiplied alpha, so you have to multiply all of the color values by the alpha value. Also there is a Color struct that you might find convenient. So I suggest the below; alpha should be between 0 and 1 inclusive.
const int TEX_WIDTH = 640;
const int TEX_HEIGHT = 480;

Texture2D redScreen;

void GenerateTextures(GraphicsDevice device)
{
    redScreen = new Texture2D(device, TEX_WIDTH, TEX_HEIGHT);
    float alpha = 0.1f; // between 0 and 1 inclusive (~10% here)
    Color[] red = new Color[TEX_WIDTH * TEX_HEIGHT];
    for (int i = 0; i < TEX_WIDTH * TEX_HEIGHT; i++)
        red[i] = new Color(255, 0, 0) * alpha; // multiplying a Color by alpha premultiplies every channel
    redScreen.SetData<Color>(red);
}
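Alternatively, if you want to keep the raw uint array from the question, the premultiplication can be done by hand. A sketch, assuming the default SurfaceFormat.Color layout that packs the value as ABGR with red in the lowest byte:
// Sketch: premultiply a ~10% red by hand and pack it the way SurfaceFormat.Color
// expects (red in the lowest byte, alpha in the highest).
uint a = 0x1A;                 // desired alpha (~10%)
uint r = 0xFF * a / 0xFF;      // premultiply each channel by alpha
uint g = 0x00 * a / 0xFF;
uint b = 0x00 * a / 0xFF;
uint premultipliedRed = (a << 24) | (b << 16) | (g << 8) | r;
// use premultipliedRed in place of 0x1A0000ff in the question's original loop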
I don't see you specifying a surface/pixel format. Are you sure each pixel is a uint?
To be sure, create a texture with a specified layout and then calculate the value in it for a given R, G, B and A.

Algorithm for finding a painted region on a canvas

Update: I am attempting to pull a little clutter out of this post and sum it up more concisely. Please see the original edit if needed.
I am currently attempting to trace a series of single colored blobs on a Bitmap canvas.
e.g. An example of the bitmap I am attempting to trace would look like the following:
alt text http://www.refuctored.com/polygons.bmp
After successfully tracing the outlines of the 3 blobs on the image, I would have a class that held the color of a blob tied to a point list representing the outline of the blob (not all the pixels inside of the blobs).
The problem I am running into is logic in instances where a neighboring pixel has no surrounding pixels other than the previous pixel.
e.g. the top example would trace fine, but the second would fail because the pixel has nowhere to go, since the previous pixels have already been used.
alt text http://www.refuctored.com/error.jpg
I am tracing left-to-right, top-to-bottom, favoring diagonal angles over right angles. I must be able to redraw an exact copy of the region based off the data I extract, so the pixels in the list must be in the right order for the copy to work.
Thus far, my attempts have been riddled with failure; I've spent a couple of days pulling my hair out trying to rewrite the algorithms a little differently each time, without success. Has anyone else had a similar issue, and a good algorithm for finding the edges?
One simple trick to avoid these cul-de-sacs is to double the size of the image you want to trace, using a nearest-neighbour scaling algorithm, before tracing it. That way you will never get single-pixel strips.
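A minimal sketch of that pre-scaling step with GDI+ (an assumption of how it might be implemented, not part of the original suggestion):
// Sketch: double the bitmap's size with nearest-neighbour scaling, so that
// single-pixel strips become 2-pixel-wide strips before tracing.
Bitmap Upscale2x(Bitmap source)
{
    Bitmap scaled = new Bitmap(source.Width * 2, source.Height * 2);
    using (Graphics g = Graphics.FromImage(scaled))
    {
        g.InterpolationMode = System.Drawing.Drawing2D.InterpolationMode.NearestNeighbor;
        g.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.Half;
        g.DrawImage(source, new Rectangle(0, 0, scaled.Width, scaled.Height));
    }
    return scaled;
}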
The alternative is to use a marching squares algorithm - but it seems to still have one or two cases where it fails: http://www.sakri.net/blog/2009/05/28/detecting-edge-pixels-with-marching-squares-algorithm/
Have you looked at blob detection algorithms? For example, http://opencv.willowgarage.com/wiki/cvBlobsLib if you can integrate OpenCV into your application. Coupled with thresholding to create binary images for each color (or color range) in your image, you could easily find the blobs that are the same color. Repeat for each color in the image, and you have a list of blobs sorted by color.
If you cannot use OpenCV directly, perhaps the paper referenced by that library ("A linear-time component labeling algorithm using contour tracing technique", F.Chang et al.) would provide a good method for finding blobs.
Rather than using recursion, use a stack.
Pseudo-code:
Add initial pixel to polygon
Add initial pixel to stack
while (stack is not empty) {
    pop pixel off the stack
    foreach (neighbor n of popped pixel) {
        if (n is close enough in color to initial pixel) {
            Add n to polygon
            Add n to stack
        }
    }
}
This will use a lot less memory than the same solution using recursion.
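Translated into C# as a sketch (the colorsMatch predicate and the 4-neighbour offsets are assumptions about how you compare and visit pixels):
// Sketch: stack-based region fill over a Bitmap, collecting matching pixels.
List<Point> CollectRegion(Bitmap bmp, Point start, Func<Color, Color, bool> colorsMatch)
{
    List<Point> region = new List<Point>();
    bool[,] visited = new bool[bmp.Width, bmp.Height];
    Stack<Point> stack = new Stack<Point>();
    Color seed = bmp.GetPixel(start.X, start.Y);

    stack.Push(start);
    visited[start.X, start.Y] = true;

    int[] dx = { 1, -1, 0, 0 };
    int[] dy = { 0, 0, 1, -1 };

    while (stack.Count > 0)
    {
        Point p = stack.Pop();
        region.Add(p);

        for (int i = 0; i < 4; i++)
        {
            int nx = p.X + dx[i], ny = p.Y + dy[i];
            if (nx < 0 || ny < 0 || nx >= bmp.Width || ny >= bmp.Height) continue;
            if (visited[nx, ny]) continue;

            if (colorsMatch(seed, bmp.GetPixel(nx, ny)))
            {
                visited[nx, ny] = true;
                stack.Push(new Point(nx, ny));
            }
        }
    }
    return region;
}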
Just send your Image to the BuildPixelArray function and then call FindRegions.
After that, the colors variable will hold your list of colors, with the pixel coordinates in every list member.
I've copied the source from one of my projects, so there may be some undefined variables or syntax errors.
public class ImageProcessing
{
    private int[,] pixelArray;
    private int imageWidth;
    private int imageHeight;
    List<MyColor> colors;

    public void BuildPixelArray(ref Image myImage)
    {
        imageHeight = myImage.Height;
        imageWidth = myImage.Width;
        pixelArray = new int[imageWidth, imageHeight];
        Rectangle rect = new Rectangle(0, 0, myImage.Width, myImage.Height);

        Bitmap temp = new Bitmap(myImage);
        BitmapData bmpData = temp.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format24bppRgb);
        int remain = bmpData.Stride - bmpData.Width * 3;

        unsafe
        {
            byte* ptr = (byte*)bmpData.Scan0;
            for (int j = 0; j < bmpData.Height; j++)
            {
                for (int i = 0; i < bmpData.Width; i++)
                {
                    pixelArray[i, j] = ptr[0] + ptr[1] * 256 + ptr[2] * 256 * 256;
                    ptr += 3;
                }
                ptr += remain;
            }
        }

        temp.UnlockBits(bmpData);
    }

    public void FindRegions()
    {
        colors = new List<MyColor>();
        for (int i = 0; i < imageWidth; i++)
        {
            for (int j = 0; j < imageHeight; j++)
            {
                int tmpColorValue = pixelArray[i, j];
                MyColor tmp = new MyColor(tmpColorValue);
                if (colors.Contains(tmp))
                {
                    MyColor tmpColor = (from p in colors
                                        where p.colorValue == tmpColorValue
                                        select p).First();
                    tmpColor.pointList.Add(new MyPoint(i, j));
                }
                else
                {
                    tmp.pointList.Add(new MyPoint(i, j));
                    colors.Add(tmp);
                }
            }
        }
    }
}
public class MyColor : IEquatable<MyColor>
{
    public int colorValue { get; set; }
    public List<MyPoint> pointList = new List<MyPoint>();

    public MyColor(int _colorValue)
    {
        colorValue = _colorValue;
    }

    public bool Equals(MyColor other)
    {
        if (this.colorValue == other.colorValue)
        {
            return true;
        }
        return false;
    }
}

public class MyPoint
{
    public int xCoord { get; set; }
    public int yCoord { get; set; }

    public MyPoint(int _xCoord, int _yCoord)
    {
        xCoord = _xCoord;
        yCoord = _yCoord;
    }
}
If you're getting a stack overflow I would guess that you're not excluding already-checked pixels. The first check on visiting a square should be whether you've been here before.
Also, I was working on a related problem not too long ago and I came up with a different approach that uses a lot less memory:
A queue:
AddPointToQueue(x, y);
repeat
    x, y = HeadItem;
    AddMaybe(x - 1, y); x + 1, y; x, y - 1; x, y + 1;
until QueueIsEmpty;

AddMaybe(x, y):
    if Visited[x, y] return;
    Visited[x, y] = true;
    AddPointToQueue(x, y);
The point of this approach is that you end up with your queue basically holding a line wrapped around the mapped area. This limits memory usage better than a stack can.
If relevant it also can be trivially modified to yield the travel distance to any square.
Try using AForge.NET. I would go for filtering by color, then a threshold, and then some morphology to shrink the black/white zones so the objects lose contact with each other. Then you could go for the blobs.
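A rough sketch of that pipeline with AForge.NET filters (the target color, radius and threshold are placeholder values; adapt them to your image):
using AForge.Imaging;
using AForge.Imaging.Filters;

// Sketch: isolate one color, binarize, erode so touching objects separate,
// then extract blobs. The color, radius and threshold are placeholders.
Bitmap source = (Bitmap)Bitmap.FromFile("polygons.bmp");

EuclideanColorFiltering colorFilter = new EuclideanColorFiltering();
colorFilter.CenterColor = new RGB(255, 0, 0); // keep near-red pixels
colorFilter.Radius = 40;
Bitmap filtered = colorFilter.Apply(source);

Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(filtered);
new Threshold(50).ApplyInPlace(gray);
new Erosion().ApplyInPlace(gray);   // shrink regions so touching blobs lose contact

BlobCounter blobCounter = new BlobCounter();
blobCounter.ProcessImage(gray);
Blob[] blobs = blobCounter.GetObjectsInformation();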
