Cell-based liquid simulation: local pressure model? (C#)

I'm attempting to add semi-realistic water to my tile-based, 2D platformer. The water must behave somewhat lifelike, with a pressure model that is entirely local (i.e. a cell can only use data from the cells near it). This is needed because of the nature of my game: parts of the world may not be loaded into memory, so a cell can never rely on data from far away.
I've tried one method so far, but I could not refine it enough to work with my constraints.
For that model, each cell would be slightly compressible, depending on the amount of water in the cell above it. When a cell's water content exceeded its normal capacity, the cell would try to expand upwards. This created a fairly nice simulation, albeit slow at times (not lag; changes in the water simply took a while to propagate). When I tried to implement it in my engine, I found that my constraints didn't allow the precision required for it to work. I can provide a more in-depth explanation or a link to the original concept if you wish.
My constraints:
Only 256 discrete values for the water level (no floating-point variables :( ) -- EDIT: floats are fine.
Fixed grid size.
2D Only.
U-Bend Configurations must work.
The language I'm using is C#, but I can probably take code in other languages and translate it to C#.
The question is, can anyone give me a pressure model for water, following my constraints as closely as possible?

How about a different approach?
Forget about floats, that's asking for roundoff problems in the long run. Instead, how about a unit of water?
Each cell contains a certain number of units of water. Each iteration you compare the cell with its four neighbours and move, say, 10% of the difference in units (change this to alter the propagation speed). A mapping function translates units of water into a water level.
To avoid calculation-order problems use two values per cell, one for the old units and one for the new. Calculate everything, then copy the updated values back. Two ints = 8 bytes per cell; if you have a million cells that's still only 8 MB.
If you actually want to simulate waves you'll also need to store the flow: four more values, 16 MB. To make a wave, give the flow some inertia: after you calculate the desired flow, move the previous flow, say, 10% of the way towards the desired value.
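To make that concrete, here is a minimal sketch of the unit-of-water idea in C# (class and member names are my own; the 10% rate and the 0..255 unit range come from the constraints above):
using System;

class WaterGrid
{
    const int MaxUnits = 255;           // matches the 256-discrete-values constraint
    readonly int width, height;
    readonly bool[,] solid;             // true = wall; water never enters
    int[,] oldUnits, newUnits;          // double buffer to avoid calculation-order problems

    public WaterGrid(int width, int height, bool[,] solid)
    {
        this.width = width;
        this.height = height;
        this.solid = solid;
        oldUnits = new int[width, height];
        newUnits = new int[width, height];
    }

    public void Step()
    {
        Array.Copy(oldUnits, newUnits, oldUnits.Length);
        for (int x = 0; x < width; x++)
            for (int y = 0; y < height; y++)
            {
                if (solid[x, y]) continue;
                MoveShare(x, y, x + 1, y);
                MoveShare(x, y, x - 1, y);
                MoveShare(x, y, x, y + 1);
                MoveShare(x, y, x, y - 1);
            }
        var tmp = oldUnits; oldUnits = newUnits; newUnits = tmp;   // swap buffers
    }

    // Push ~10% of the level difference towards a lower neighbour.
    void MoveShare(int x, int y, int nx, int ny)
    {
        if (nx < 0 || ny < 0 || nx >= width || ny >= height || solid[nx, ny]) return;
        int diff = oldUnits[x, y] - oldUnits[nx, ny];
        if (diff <= 0) return;
        int flow = diff / 10;
        newUnits[x, y] -= flow;
        newUnits[nx, ny] += flow;
    }

    // Map units back to a displayable 0..1 water level.
    public float LevelAt(int x, int y) { return oldUnits[x, y] / (float)MaxUnits; }
}
Note this is pure diffusion: pressure propagates locally, but there is no gravity term, so for a platformer you would bias the downward move (or add the flow/inertia values described above) before it looks like water.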

Try treating each contiguous area of water as a single area (like flood fill) and track 1) the lowest cell(s) where water can escape and 2) the highest cell(s) from which water can come, then move water from the top to the bottom. This isn't local, but I think you can treat the edges of the area you want to affect as not connected and process any subset that you want. Re-evaluate what areas are contiguous on each frame (re-flood on each frame) so that when blobs converge, they can start being treated as one.
Here's my code from a Windows Forms demo of the idea. It may need some fine tuning, but seems to work quite well in my tests:
public partial class Form1 : Form
{
byte[,] tiles;
const int rows = 50;
const int cols = 50;
public Form1()
{
SetStyle(ControlStyles.ResizeRedraw, true);
InitializeComponent();
tiles = new byte[cols, rows];
for (int i = 0; i < 10; i++)
{
tiles[20, i+20] = 1;
tiles[23, i+20] = 1;
tiles[32, i+20] = 1;
tiles[35, i+20] = 1;
tiles[i + 23, 30] = 1;
tiles[i + 23, 32] = 1;
tiles[21, i + 15] = 2;
tiles[21, i + 4] = 2;
if (i % 2 == 0) tiles[22, i] = 2;
}
tiles[20, 30] = 1;
tiles[20, 31] = 1;
tiles[20, 32] = 1;
tiles[21, 32] = 1;
tiles[22, 32] = 1;
tiles[33, 32] = 1;
tiles[34, 32] = 1;
tiles[35, 32] = 1;
tiles[35, 31] = 1;
tiles[35, 30] = 1;
}
protected override void OnPaint(PaintEventArgs e)
{
base.OnPaint(e);
using (SolidBrush b = new SolidBrush(Color.White))
{
for (int y = 0; y < rows; y++)
{
for (int x = 0; x < cols; x++)
{
switch (tiles[x, y])
{
case 0:
b.Color = Color.White;
break;
case 1:
b.Color = Color.Black;
break;
default:
b.Color = Color.Blue;
break;
}
e.Graphics.FillRectangle(b, x * ClientSize.Width / cols, y * ClientSize.Height / rows,
ClientSize.Width / cols + 1, ClientSize.Height / rows + 1);
}
}
}
}
private bool IsLiquid(int x, int y)
{
return tiles[x, y] > 1;
}
private bool IsSolid(int x, int y)
{
return tiles[x, y] == 1;
}
private bool IsEmpty(int x, int y)
{
return IsEmpty(tiles, x, y);
}
public static bool IsEmpty(byte[,] tiles, int x, int y)
{
return tiles[x, y] == 0;
}
private void ProcessTiles()
{
byte processedValue = 0xFF;
byte unprocessedValue = 0xFF;
for (int y = 0; y < rows; y ++)
for (int x = 0; x < cols; x++)
{
if (IsLiquid(x, y))
{
if (processedValue == 0xff)
{
unprocessedValue = tiles[x, y];
processedValue = (byte)(5 - tiles[x, y]);
}
if (tiles[x, y] == unprocessedValue)
{
BlobInfo blob = GetWaterAt(new Point(x, y), unprocessedValue, processedValue, new Rectangle(0, 0, 50, 50));
blob.ProcessMovement(tiles);
}
}
}
}
class BlobInfo
{
private int minY;
private int maxEscapeY;
private List<int> TopXes = new List<int>();
private List<int> BottomEscapeXes = new List<int>();
public BlobInfo(int x, int y)
{
minY = y;
maxEscapeY = -1;
TopXes.Add(x);
}
public void NoteEscapePoint(int x, int y)
{
if (maxEscapeY < 0)
{
maxEscapeY = y;
BottomEscapeXes.Clear();
}
else if (y < maxEscapeY)
return;
else if (y > maxEscapeY)
{
maxEscapeY = y;
BottomEscapeXes.Clear();
}
BottomEscapeXes.Add(x);
}
public void NoteLiquidPoint(int x, int y)
{
if (y < minY)
{
minY = y;
TopXes.Clear();
}
else if (y > minY)
return;
TopXes.Add(x);
}
public void ProcessMovement(byte[,] tiles)
{
int min = TopXes.Count < BottomEscapeXes.Count ? TopXes.Count : BottomEscapeXes.Count;
for (int i = 0; i < min; i++)
{
if (IsEmpty(tiles, BottomEscapeXes[i], maxEscapeY) && (maxEscapeY > minY))
{
tiles[BottomEscapeXes[i], maxEscapeY] = tiles[TopXes[i], minY];
tiles[TopXes[i], minY] = 0;
}
}
}
}
private BlobInfo GetWaterAt(Point start, byte unprocessedValue, byte processedValue, Rectangle bounds)
{
Stack<Point> toFill = new Stack<Point>();
BlobInfo result = new BlobInfo(start.X, start.Y);
toFill.Push(start);
do
{
Point cur = toFill.Pop();
while ((cur.X > bounds.X) && (tiles[cur.X - 1, cur.Y] == unprocessedValue))
cur.X--;
if ((cur.X > bounds.X) && IsEmpty(cur.X - 1, cur.Y))
result.NoteEscapePoint(cur.X - 1, cur.Y);
bool pushedAbove = false;
bool pushedBelow = false;
for (; ((cur.X < bounds.X + bounds.Width) && tiles[cur.X, cur.Y] == unprocessedValue); cur.X++)
{
result.NoteLiquidPoint(cur.X, cur.Y);
tiles[cur.X, cur.Y] = processedValue;
if (cur.Y > bounds.Y)
{
if (IsEmpty(cur.X, cur.Y - 1))
{
result.NoteEscapePoint(cur.X, cur.Y - 1);
}
if ((tiles[cur.X, cur.Y - 1] == unprocessedValue) && !pushedAbove)
{
pushedAbove = true;
toFill.Push(new Point(cur.X, cur.Y - 1));
}
if (tiles[cur.X, cur.Y - 1] != unprocessedValue)
pushedAbove = false;
}
if (cur.Y < bounds.Y + bounds.Height - 1)
{
if (IsEmpty(cur.X, cur.Y + 1))
{
result.NoteEscapePoint(cur.X, cur.Y + 1);
}
if ((tiles[cur.X, cur.Y + 1] == unprocessedValue) && !pushedBelow)
{
pushedBelow = true;
toFill.Push(new Point(cur.X, cur.Y + 1));
}
if (tiles[cur.X, cur.Y + 1] != unprocessedValue)
pushedBelow = false;
}
}
if ((cur.X < bounds.X + bounds.Width) && (IsEmpty(cur.X, cur.Y)))
{
result.NoteEscapePoint(cur.X, cur.Y);
}
} while (toFill.Count > 0);
return result;
}
private void timer1_Tick(object sender, EventArgs e)
{
ProcessTiles();
Invalidate();
}
private void Form1_MouseMove(object sender, MouseEventArgs e)
{
if (e.Button == MouseButtons.Left)
{
int x = e.X * cols / ClientSize.Width;
int y = e.Y * rows / ClientSize.Height;
if ((x >= 0) && (x < cols) && (y >= 0) && (y < rows))
tiles[x, y] = 2;
}
}
}

From a fluid-dynamics viewpoint, a reasonably popular lattice-based family of algorithms is the Lattice Boltzmann method. A basic implementation, ignoring all the fine detail that makes academics happy, is relatively simple and fast and still gives reasonably correct dynamics.
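To give a feel for how small the core of such a scheme is, here is a rough, untested D2Q9 sketch in C# (BGK collision plus streaming only; no solid-tile bounce-back, no free surface, and all names are mine), purely to illustrate that the update only ever touches a cell's immediate neighbours:
class LatticeBoltzmann
{
    const int Q = 9;
    // D2Q9 lattice velocities and weights.
    static readonly int[] ex = { 0, 1, 0, -1, 0, 1, -1, -1, 1 };
    static readonly int[] ey = { 0, 0, 1, 0, -1, 1, 1, -1, -1 };
    static readonly double[] w = { 4.0 / 9, 1.0 / 9, 1.0 / 9, 1.0 / 9, 1.0 / 9,
                                   1.0 / 36, 1.0 / 36, 1.0 / 36, 1.0 / 36 };
    readonly int nx, ny;
    readonly double tau;                // relaxation time, sets the viscosity
    double[,,] f, fTmp;                 // particle distributions per cell and direction

    public LatticeBoltzmann(int nx, int ny, double tau)
    {
        this.nx = nx; this.ny = ny; this.tau = tau;
        f = new double[nx, ny, Q];
        fTmp = new double[nx, ny, Q];
        for (int x = 0; x < nx; x++)                 // start at rest with density 1
            for (int y = 0; y < ny; y++)
                for (int i = 0; i < Q; i++)
                    f[x, y, i] = w[i];
    }

    public void Step()
    {
        // 1) Collision: relax each cell towards its local equilibrium.
        for (int x = 0; x < nx; x++)
            for (int y = 0; y < ny; y++)
            {
                double rho = 0, ux = 0, uy = 0;
                for (int i = 0; i < Q; i++)
                {
                    rho += f[x, y, i];
                    ux += ex[i] * f[x, y, i];
                    uy += ey[i] * f[x, y, i];
                }
                ux /= rho; uy /= rho;
                for (int i = 0; i < Q; i++)
                {
                    double eu = ex[i] * ux + ey[i] * uy;
                    double feq = w[i] * rho *
                        (1 + 3 * eu + 4.5 * eu * eu - 1.5 * (ux * ux + uy * uy));
                    f[x, y, i] += (feq - f[x, y, i]) / tau;
                }
            }

        // 2) Streaming: move each distribution one cell along its direction
        //    (periodic wrap here; a real sim would bounce back at solid tiles).
        for (int x = 0; x < nx; x++)
            for (int y = 0; y < ny; y++)
                for (int i = 0; i < Q; i++)
                    fTmp[(x + ex[i] + nx) % nx, (y + ey[i] + ny) % ny, i] = f[x, y, i];
        var t = f; f = fTmp; fTmp = t;
    }
}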

Related

Adaptive thresholding technique on live video from a webcam in C#

I am trying to detect light from two LEDs (red and blue). I did that using the Bernsen thresholding technique; however, I applied it to a still image. Now I want to apply the same technique to live video from my webcam. Is there any way I can simply adapt the code I wrote for images so that it works on frames from the webcam? I'll add the code I used for this thresholding technique below.
private ArrayList getNeighbours(int xPos, int yPos, Bitmap bitmap)
{
//This goes around the image in windows of 5
ArrayList neighboursList = new ArrayList();
int xStart, yStart, xFinish, yFinish;
int pixel;
xStart = xPos - 5;
yStart = yPos - 5;
xFinish = xPos + 5;
yFinish = yPos + 5;
for (int y = yStart; y <= yFinish; y++)
{
for (int x = xStart; x <= xFinish; x++)
{
if (x < 0 || y < 0 || x > (bitmap.Width - 1) || y > (bitmap.Height - 1))
{
continue;
}
else
{
pixel = bitmap.GetPixel(x, y).R;
neighboursList.Add(pixel);
}
}
}
return neighboursList;
}
private void button5_Click_1(object sender, EventArgs e)
{
//The input image
Bitmap image = new Bitmap(pictureBox2.Image);
progressBar1.Minimum = 0;
progressBar1.Maximum = image.Height - 1;
progressBar1.Value = 0;
Bitmap result = new Bitmap(pictureBox2.Image);
int iMin, iMax, t, c, contrastThreshold, pixel;
contrastThreshold = 180;
ArrayList list = new ArrayList();
for (int y = 0; y < image.Height; y++)
{
for (int x = 0; x < image.Width; x++)
{
list.Clear();
pixel = image.GetPixel(x, y).R;
list = getNeighbours(x, y, image);
list.Sort();
iMin = Convert.ToByte(list[0]);
iMax = Convert.ToByte(list[list.Count - 1]);
// These are the calculations to test whether the current pixel is light or dark
t = ((iMax + iMin) / 2);
c = (iMax - iMin);
if (c < contrastThreshold)
{
pixel = ((t >= 160) ? 0 : 255);
}
else
{
pixel = ((pixel >= t) ? 0 : 255);
}
result.SetPixel(x, y, Color.FromArgb(pixel, pixel, pixel));
}
progressBar1.Value = y;
}
pictureBox3.Image =result;
}

Algorithm for moving pixels in bulk?

Currently I'm trying to produce a simple 2D map-generation program, and it is pretty much finished apart from one key thing: moving the generated islands. As it stands, the program keeps all the islands clumped in the middle of the map, separated by colour, like some disco ball of puke; my main problem is moving the islands to new locations.
The program should randomly place the islands in new spots based on colour, but I'm having a considerable amount of difficulty doing this: every solution I've attempted has either fallen flat in a tsunami of 'index out of bounds of the array' errors or has worked but taken literal hours to move a single island.
TL;DR: Do any algorithms exist that would let me move shapes made of pixels to random locations while keeping their existing shapes? Mine suck.
Edit: I'll try to rewrite this to be easier to read later, since I'm in a rush, but in essence it reads all the pixels from the circle using GetPixel and stores them in an array based on their colour. It then generates a random location and runs the same code again, only this time it accepts a colour as an argument and places that colour at the pixel relative to the centre of the circle whenever it finds a matching colour.
In theory this should go through every colour and generate a new position for each one that maintains the island's shape; in practice it just takes forever.
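Before the original code below, here is a sketch of that bulk-move idea (hypothetical names throughout, and Color.DarkBlue standing in for whatever your sea/background colour is): gather an island's pixels once, erase them, then stamp the whole list around a random new centre, so each island is read and written exactly once instead of once per radar pass.
using System;
using System.Collections.Generic;
using System.Drawing;

static class IslandMover
{
    public static void MoveIsland(Bitmap map, Color islandColour, Random rng)
    {
        // 1) Collect the island's pixels and its bounding box.
        var points = new List<Point>();
        int minX = int.MaxValue, minY = int.MaxValue, maxX = -1, maxY = -1;
        for (int x = 0; x < map.Width; x++)
            for (int y = 0; y < map.Height; y++)
                if (map.GetPixel(x, y).ToArgb() == islandColour.ToArgb())
                {
                    points.Add(new Point(x, y));
                    if (x < minX) minX = x;
                    if (x > maxX) maxX = x;
                    if (y < minY) minY = y;
                    if (y > maxY) maxY = y;
                }
        if (points.Count == 0) return;
        int cx = (minX + maxX) / 2, cy = (minY + maxY) / 2;

        // 2) Erase the island from its old position (sea colour assumed to be DarkBlue).
        foreach (Point p in points) map.SetPixel(p.X, p.Y, Color.DarkBlue);

        // 3) Stamp it, shape unchanged, around a random new centre
        //    (clamped so the bounding box stays on the map; assumes the
        //    island is smaller than the map).
        int halfW = (maxX - minX) / 2 + 1, halfH = (maxY - minY) / 2 + 1;
        int newCx = rng.Next(halfW, map.Width - halfW);
        int newCy = rng.Next(halfH, map.Height - halfH);
        foreach (Point p in points)
            map.SetPixel(p.X - cx + newCx, p.Y - cy + newCy, islandColour);
    }
}
GetPixel/SetPixel keep the sketch readable; for speed you would switch to LockBits, but even this way each island is only two passes over the bitmap.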
//Thoughts - use the circle generator as a radar to find all the seperate colors, then for each color randomly generate an x and a y. then use the circle generator but only apply the colors that are selected
if (tempi >= 716 || tempib > 0)
{
if(tempib <= 0)
{
tempi = 0;
tempib = 1;
randxb = Rander.Next(10, xlen - 10);
randyb = Rander.Next(10, ylen - 10);
}
tempi += 1;
day += 1;
if(day >= 31)
{
month += 1;
day = 1;
}
if(month >= 13)
{
year += 1;
month = 1;
}
AD = "PF";
era = "Prehistoric era";
age = "Islandic Age";
Point temppb = new Point(randxb, randyb);
if (colours[tempib] == Color.DarkBlue || colours[tempib] == Color.FromArgb(0, 0, 0))
{
tempib += 1;
}
else
{
Radar(0, temppb, "write", colours[tempib]);
}
tempi = 0;
tempib += 1;
randxb = Rander.Next(10, xlen - 10);
randyb = Rander.Next(10, ylen - 10);
if (tempib >= islandnuma)
{
age = "Neanderthalic Age";
}
}
else
{
year += Rander.Next(1, 3);
day = 1;
AD = "PF";
era = "Prehistoric era";
Point tempp = new Point(xlen / 2 - 150, ylen / 2 - 150);
tempi += 1;
Radar(tempi, tempp, "scan", Color.White);
if(tempi >= 716)
{
clearmap();
}
}
}
This is the terrible algorithm it calls
Color[,] scanresults = new Color[717, 4499]; //shell, place in shell
private void Radar(int circle, Point pos, string mode, Color col) //Fuck this doesnt work i need to change it
{
using (var g = Graphics.FromImage(pictureBox1.Image))
{
if (mode == "scan")
{
int mj = 0;
if (circle <= 716)
{
for (double i = 0.0; i < 360.0; i += 0.1)
{
mj += 1;
int radius = circle / 2; //max size = 716
double angle = i * System.Math.PI / 180;
int x = pos.X - (int)(radius * System.Math.Cos(angle));
int y = pos.Y - (int)(radius * System.Math.Sin(angle));
Color m = Map.GetPixel(x, y);
scanresults[circle, mj] = Map.GetPixel(x, y);
}
}
else
{
return;
}
}
else
{
if(mode == "write")
{
for(int c2 = 0; c2 <= 716; c2++)
{
int bmj = 0;
for (double i = 0.0; i < 360.0; i += 0.1)
{
try
{
if (mode == "write")
{
bmj += 1;
int radius = (716 - c2) / 2; //max size = 716
double angle = i * System.Math.PI / 180;
int x = pos.X - (int)(radius * System.Math.Cos(angle));
int y = pos.Y - (int)(radius * System.Math.Sin(angle));
if (scanresults[c2, bmj] == col)
{
Map.SetPixel(x, y, col);
}
}
}
catch (Exception em)
{
Console.Write("error: " + em);
}
//Color m = Map.GetPixel(x, y);
//scanresults[circle, mj] = Map.GetPixel(x, y);
}
}
}
}
//Dont hate me im defensive about my terrible coding style
}
}

Detecting peaks in images

I have a large set of infrared images of seeds; their sizes vary slightly.
I would like to find them (in the fastest possible way).
Below I show zoomed-in details of the images I process.
After a first noise-removal and blob filter this is what I have:
The bright white is just direct reflection of the IR lamp; white pixels never stretch out over multiple seeds.
To make it clearer I placed a letter on some individual seeds.
The problems I have:
A is a single seed; dirt on the seed generates a slight dark line.
B and the X close to it: even at its darkest, the intersection between them is still brighter than some other seeds, so I can't just lower the brightness or remove pixels whose grey value is below a certain threshold.
C is actually three seeds close to each other.
The smallest visible seeds above should not become even smaller.
I'm not using MATLAB or OpenCV, as I work directly with locked image and memory data. I can access pixel data by array or by simple GetPixel/PutPixel calls. I wrote my own graphics library, which is fast enough for live camera data; processing currently takes around 13 ms, and at around 25 ms I start lagging behind the stream.
I wonder how to separate those 'cloudy' blobs better.
I'm thinking of finding local maxima over a certain pixel range, but that should still see A as one seed while finding that B and X are not connected.
So I'm not sure what such a local-peak function (or another function) should look like. Although I code in C#, I looked at C++ functions as well, like dilate, etc., but that's not it. I also wrote a function to check the slope angle (as if it were a mountain height map), but that couldn't divide areas B and C.
OK, I've now written different slope-detection code: instead of looking for some angle I just look for the tilting point over a small range, and it works nicely on the X axis, but I think it should really work on both X and Y.
Here's the new result:
It can resolve issues A and B!
However, it doesn't differentiate between seeds that are aligned in a vertical row, and it causes small white noise (non-connected lines) in places where there is nearly nothing to detect. I'm not yet sure how to do the same (combined) over the Y axis to get the tops and then erase everything within a certain distance of the top (to separate the seeds).
Here's the code, just showing the pixel operations:
for (int y = raw.Height; y > 5; y--)
{
    slopeUP = false;
    int[] peek = new int[raw.Width];
    for (int x = raw.Width; x > 7; x--)
    {
        int a = raw.GetPixelBleu(x, y);
        int b = raw.GetPixelBleu(x - 1, y);
        int c = raw.GetPixelBleu(x - 2, y);
        int d = raw.GetPixelBleu(x - 11, y);
        int f = raw.GetPixelBleu(x - 12, y);
        if ((f + d) > (a + b)) slopeUP = true;
        if ((f + d) < (a + b))
        {
            if (slopeUP)
            {
                slopeUP = false;
                peek[x - 6] = 10;
                //raw.SetPixel(x, y, Color.GreenYellow);
            }
            else peek[x - 6] = 0;
        }
    }
    for (int x = raw.Width; x > 7; x--)
    {
        if (peek[x - 1] > 5) raw.SetPixel(x, y, Color.Lavender);
    }
}
In this SO answer to a similar question I applied persistent homology to find peaks in an image. I took your image, scaled it down to 50%, applied a Gaussian blur with radius 20 (in Gimp), and applied the methods described in the other article:
I only show peaks with a persistence (see the other SO answer) of at least 20. The persistence diagram is shown here:
The 20th peak would be the little peak in the top-left corner with a persistence of about 9. By applying a stronger Gaussian filter the image gets more diffuse and the peaks become more prominent.
Python code can be found here.
So, as far as speed goes, I'm only going off the image you posted here, on which everything runs blazing fast because it is tiny. Note that I padded the image after binarizing and never un-padded it, so you will want to either un-pad or shift your results accordingly. You may not even want to pad, but it allows detection of cut-off seeds.
Overview of the pipeline: remove saturation >> Gaussian blur >> binarize >> pad >> distance transform >> peaks >> clustering.
That being said, here is my code, with results below:
void drawText(Mat & image);
void onMouse(int event, int x, int y, int, void*);
Mat bruteForceLocalMax(Mat srcImage, int searchRad);
void zoomPixelImage(Mat sourceImage, int multFactor, string name, bool mouseCallback);
Mat mergeLocalPeaks(Mat srcImage, int mergeRadius);
Mat image;
bool debugDisplays = false;
int main()
{
cout << "Built with OpenCV " << CV_VERSION << endl;
TimeStamp precisionClock = TimeStamp();
image = imread("../Raw_Images/Seeds1.png",0);
if (image.empty()) { cout << "failed to load image"<<endl; }
else
{
zoomPixelImage(image, 5, "raw data", false);
precisionClock.labeledlapStamp("image read", true);
//find max value in image that is not oversaturated
int maxVal = 0;
for (int x = 0; x < image.rows; x++)
{
for (int y = 0; y < image.cols; y++)
{
int val = image.at<uchar>(x, y);
if (val >maxVal && val !=255)
{
maxVal = val;
}
}
}
//get rid of oversaturation regions (as they throw off processing)
image.setTo(maxVal, image == 255);
if (debugDisplays)
{zoomPixelImage(image, 5, "unsaturated data", false);}
precisionClock.labeledlapStamp("Unsaturate Data", true);
Mat gaussianBlurred = Mat();
GaussianBlur(image, gaussianBlurred, Size(9, 9), 10, 0);
if (debugDisplays)
{zoomPixelImage(gaussianBlurred, 5, "blurred data", false);}
precisionClock.labeledlapStamp("Gaussian", true);
Mat binarized = Mat();
threshold(gaussianBlurred, binarized, 50, 255, THRESH_BINARY);
if (debugDisplays)
{zoomPixelImage(binarized, 5, "binarized data", false);}
precisionClock.labeledlapStamp("binarized", true);
//pad edges (may or may not be neccesary depending on setup)
Mat paddedImage = Mat();
copyMakeBorder(binarized, paddedImage, 1, 1, 1, 1, BORDER_CONSTANT, 0);
if (debugDisplays)
{zoomPixelImage(paddedImage, 5, "padded data", false);}
precisionClock.labeledlapStamp("add padding", true);
Mat distTrans = Mat();
distanceTransform(paddedImage, distTrans, CV_DIST_L1,3,CV_8U);
if (debugDisplays)
{zoomPixelImage(distTrans, 5, "distanceTransform", true);}
precisionClock.labeledlapStamp("distTransform", true);
Mat peaks = Mat();
peaks = bruteForceLocalMax(distTrans,10);
if (debugDisplays)
{zoomPixelImage(peaks, 5, "peaks", false);}
precisionClock.labeledlapStamp("peaks", true);
//attempt to cluster any colocated peaks and find the best clustering count
Mat mergedPeaks = Mat();
mergedPeaks = mergeLocalPeaks(peaks, 5);
if (debugDisplays)
{zoomPixelImage(mergedPeaks, 5, "peaks final", false);}
precisionClock.labeledlapStamp("final peaks", true);
precisionClock.fullStamp(false);
waitKey(0);
}
}
void drawText(Mat & image)
{
putText(image, "Hello OpenCV",
Point(20, 50),
FONT_HERSHEY_COMPLEX, 1, // font face and scale
Scalar(255, 255, 255), // white
1, LINE_AA); // line thickness and type
}
void onMouse(int event, int x, int y, int, void*)
{
if (event != CV_EVENT_LBUTTONDOWN)
return;
Point pt = Point(x, y);
std::cout << "x=" << pt.x << "\t y=" << pt.y << "\t value=" << int(image.at<uchar>(y,x)) << "\n";
}
void zoomPixelImage(Mat sourceImage, int multFactor, string name, bool normalized)
{
Mat zoomed;// = Mat::zeros(sourceImage.rows*multFactor, sourceImage.cols*multFactor, CV_8U);
resize(sourceImage, zoomed, Size(sourceImage.cols*multFactor, sourceImage.rows*multFactor), sourceImage.cols*multFactor, sourceImage.rows*multFactor, INTER_NEAREST);
if (normalized) { normalize(zoomed, zoomed, 0, 255, NORM_MINMAX); }
namedWindow(name);
imshow(name, zoomed);
}
Mat bruteForceLocalMax(Mat srcImage, int searchRad)
{
Mat outputArray = Mat::zeros(srcImage.rows, srcImage.cols, CV_8U);
//global search top
for (int x = 0; x < srcImage.rows - 1; x++)
{
for (int y = 0; y < srcImage.cols - 1; y++)
{
bool peak = true;
float centerVal = srcImage.at<uchar>(x, y);
if (centerVal == 0) { continue; }
//local search top
for (int a = -searchRad; a <= searchRad; a++)
{
for (int b = -searchRad; b <= searchRad; b++)
{
if (x + a<0 || x + a>srcImage.rows - 1 || y + b < 0 || y + b>srcImage.cols - 1) { continue; }
if (srcImage.at<uchar>(x + a, y + b) > centerVal)
{
peak = false;
}
if (peak == false) { break; }
}
if (peak == false) { break; }
}
if (peak)
{
outputArray.at<uchar>(x, y) = 255;
}
}
}
return outputArray;
}
Mat mergeLocalPeaks(Mat srcImage, int mergeRadius)
{
Mat outputArray = Mat::zeros(srcImage.rows, srcImage.cols, CV_8U);
//global search top
for (int x = 0; x < srcImage.rows - 1; x++)
{
for (int y = 0; y < srcImage.cols - 1; y++)
{
float centerVal = srcImage.at<uchar>(x, y);
if (centerVal == 0) { continue; }
int aveX = x;
int aveY = y;
int xCenter = -1;
int yCenter = -1;
while (aveX != xCenter || aveY != yCenter)
{
xCenter = aveX;
yCenter = aveY;
aveX = 0;
aveY = 0;
int peakCount = 0;
//local search top
for (int a = -mergeRadius; a <= mergeRadius; a++)
{
for (int b = -mergeRadius; b <= mergeRadius; b++)
{
if (xCenter + a<0 || xCenter + a>srcImage.rows - 1 || yCenter + b < 0 || yCenter + b>srcImage.cols - 1) { continue; }
if (srcImage.at<uchar>(xCenter + a, yCenter + b) > 0)
{
aveX += (xCenter + a);
aveY += (yCenter + b);
peakCount += 1;
}
}
}
double dCentX = ((double)aveX / (double)peakCount);
double dCentY = ((double)aveY / (double)peakCount);
aveX = floor(dCentX);
aveY = floor(dCentY);
}
outputArray.at<uchar>(xCenter, yCenter) = 255;
}
}
return outputArray;
}
(The speed measurements, debug displays, and final results were shown as screenshots in the original answer.)
Hope this helps! Cheers!

How to detect black bullets in an image?

Given the following image, how can I detect the black bullets (90 bullets) in this image using C#, EmguCV or AForge?
I tried to use the GetPixel(x, y) method, but it only checks pixel by pixel; it is very slow, and I need to detect bullets, not pixels.
Algorithm/Idea 1
You can divide your image into squares as shown in this sample:
With this logic you only have to check every 20th pixel. As soon as you know where the first dot is, you will know that every other dot has to be in the same horizontal line (in your provided sample).
A sample algorithm would look similar to this (please note that it needs further improvement):
Bitmap myBitmap = new Bitmap ("input.png");
int skipX = 12;
int skipY = 12;
int detectedDots = 0;
for (int x = 0; x < myBitmap.Width; x += skipX) {
for (int y = 0; y < myBitmap.Height; y += skipY) {
Color pixelColor = myBitmap.GetPixel (x, y);
if (pixelColor.R + pixelColor.G + pixelColor.B == 0) {
myBitmap.SetPixel (x, y, Color.Red);
detectedDots++;
}
}
}
myBitmap.Save ("output.png");
Console.WriteLine (detectedDots + " dots detected");
I added an output so you can check which dots got detected (all containing red pixels).
Further improvement would be to find the centre of a dot. After that you know the dot's width (and height) and can start at the first upper-left dot and advance with an offset of the dot's width.
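A rough sketch of that refinement (method and parameter names are hypothetical): from the first black pixel found, walk along its row and column until the colour changes, which gives the dot's bounding box, and therefore its centre and a sensible step size.
static Rectangle MeasureDot(Bitmap bmp, int startX, int startY)
{
    // Walk left/right along the row to find the horizontal extent.
    int left = startX, right = startX;
    while (left > 0 && IsBlack(bmp, left - 1, startY)) left--;
    while (right < bmp.Width - 1 && IsBlack(bmp, right + 1, startY)) right++;

    // Walk up/down through the horizontal centre to find the vertical extent.
    int cx = (left + right) / 2;
    int top = startY, bottom = startY;
    while (top > 0 && IsBlack(bmp, cx, top - 1)) top--;
    while (bottom < bmp.Height - 1 && IsBlack(bmp, cx, bottom + 1)) bottom++;

    return new Rectangle(left, top, right - left + 1, bottom - top + 1);
}

static bool IsBlack(Bitmap bmp, int x, int y)
{
    Color c = bmp.GetPixel(x, y);
    return c.R + c.G + c.B == 0;
}
The dot's centre is the rectangle's centre, and skipX/skipY in the loop above can then be derived from its width and height.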
Algorithm 2
The second algorithm analyses each pixel and is a lot easier to implement. As soon as it hits a black pixel it checks whether there was already a black pixel in the same vertical or horizontal line and, if so, skips ahead until the line contains no more black pixels.
Further improvement would be to store the height of the first dot and to tidy up the condition in the middle of the snippet.
Stopwatch watch = new Stopwatch(); watch.Start();
Bitmap myBitmap = new Bitmap ("input.png");
int dotsDetected = 0;
List<int> xFound = new List<int>();
for (int x = 0; x < myBitmap.Width; x++) {
bool yFound = false;
bool dotFound = false;
for (int y = 0; y < myBitmap.Height; y++) {
Color pixelColor = myBitmap.GetPixel (x, y);
if (pixelColor.R + pixelColor.G + pixelColor.B == 0) {
dotFound = true;
if (yFound)
continue;
if (xFound.Contains (y)
|| xFound.Contains (y + 1)
|| xFound.Contains (y + 2)
|| xFound.Contains (y + 3)
|| xFound.Contains (y + 4)
|| xFound.Contains (y - 1)
|| xFound.Contains (y - 2)
|| xFound.Contains (y - 3)
|| xFound.Contains (y - 4)) {
yFound = true;
continue;
}
xFound.Add (y);
//myBitmap.SetPixel (x, y, Color.Red);
dotsDetected++;
yFound = true;
} else
yFound = false;
}
if(!dotFound) //no dot found in this line
xFound.Clear();
}
//myBitmap.Save ("output.png");
watch.Stop();
Console.WriteLine("Picture analyzed in " + watch.Elapsed.TotalSeconds.ToString("#,##0.0000"));
Console.WriteLine (dotsDetected + " dots detected");
Unless you are doing this to learn more about image processing, do not re-invent the wheel. Just use emgucv (or a similar library). The emgucv syntax is rather unfriendly (mostly because of its underlying Win32 OpenCV implementation), but basically it comes down to
Contour<Point> contour = img.FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_TC89_L1, RETR_TYPE.CV_RETR_LIST);
for (; contour != null; contour = contour.HNext)
{
// You now have the contours. These have characteristics like a boundingRect, which is an easy way to approach the center of a circle.
}
I created a full solution for the problem (relying only on Bitmap.GetPixel(Int32,Int32)).
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Drawing;
namespace StackOverflow
{
public static class Program
{
static void Main(string[] args)
{
const String PATH = @"C:\sim\sbro6.png";
Stopwatch watch = new Stopwatch(); watch.Start();
List<Bitmap> l_new, l_old;
{
Bitmap bmp = Image.FromFile(PATH) as Bitmap;
// Initialization
l_old = new List<Bitmap>();
l_new = new List<Bitmap>(); l_new.Add(bmp);
// Splitting
while (l_new.Count > l_old.Count)
{
l_old = l_new; l_new = new List<Bitmap>();
l_new.AddRange(SplitBitmapsVertically(SplitBitmapsHorizontally(l_old)));
}
// for (Int32 i = 0; i < l_new.Count; i++)
// {
// l_new[i].Save(#"C:\sim\bitmap_" + i + ".bmp");
// }
}
watch.Stop();
Console.WriteLine("Picture analyzed in ".PadRight(59,'.') + " " + watch.Elapsed.TotalSeconds.ToString("#,##0.0000"));
Console.WriteLine("Dots found ".PadRight(59, '.') + " " + l_new.Count);
Console.WriteLine();
Console.WriteLine("[ENTER] terminates ...");
Console.ReadLine();
}
static List<Bitmap> SplitBitmapsVertically(List<Bitmap> l_old)
{
Int32 x_start = -1; Bitmap bmp; Boolean colContainsData = false;
List<Bitmap> l = new List<Bitmap>();
foreach(Bitmap b in l_old)
{
for (Int32 x = 0; x < b.Width; x++)
{
colContainsData = false;
for (Int32 y = 0; y < b.Height; y++)
{
if (b.GetPixel(x, y).ToArgb() != Color.White.ToArgb())
{
colContainsData = true;
}
}
if (colContainsData) if (x_start < 0) { x_start = x; }
if (!colContainsData || (x == (b.Width - 1))) if (x_start >= 0)
{
bmp = new Bitmap(x - x_start, b.Height);
for (Int32 x_tmp = x_start; x_tmp < x; x_tmp++)
for (Int32 y_tmp = 0; y_tmp < b.Height; y_tmp++)
{
bmp.SetPixel(x_tmp - x_start, y_tmp, b.GetPixel(x_tmp, y_tmp));
}
l.Add(bmp); x_start = -1;
}
}
}
return l;
}
static List<Bitmap> SplitBitmapsHorizontally(List<Bitmap> l_old)
{
Int32 y_start = -1; Bitmap bmp; Boolean rowContainsData = false;
List<Bitmap> l = new List<Bitmap>();
foreach (Bitmap b in l_old)
{
for (Int32 y = 0; y < b.Height; y++)
{
rowContainsData = false;
for (Int32 x = 0; x < b.Width; x++)
{
if (b.GetPixel(x, y).ToArgb() != Color.White.ToArgb())
{
rowContainsData = true;
}
}
if (rowContainsData) if (y_start < 0) { y_start = y; }
if (!rowContainsData || (y == (b.Height - 1))) if (y_start >= 0)
{
bmp = new Bitmap(b.Width, y - y_start);
for (Int32 x_tmp = 0; x_tmp < b.Width; x_tmp++)
for (Int32 y_tmp = y_start; y_tmp < y; y_tmp++)
{
bmp.SetPixel(x_tmp, y_tmp - y_start, b.GetPixel(x_tmp, y_tmp));
}
l.Add(bmp); y_start = -1;
}
}
}
return l;
}
}
}
It takes roughly half a second to process the image (see the attached picture).
The idea is to iteratively split the provided image into rows and columns that each contain only a subset of the dots, until exactly one dot is contained.
The dots may be arbitrarily spread over the image. Hope this helps.

Render points of both a filled and an unfilled ellipse to an array in C#

Does anyone know of any code to render an ellipse to an array in C#? I had a look around but couldn't find anything that answered my problem.
Given the following array:
bool[,] pixels = new bool[100, 100];
I'm looking for functions to render both a hollow and filled ellipse within a rectangular area. e.g:
public void Ellipse(bool[,] pixels, Rectangle area)
{
// fill pixels[x,y] = true here for the ellipse within area.
}
public void FillEllipse(bool[,] pixels, Rectangle area)
{
// fill pixels[x,y] = true here for the ellipse within area.
}
Ellipse(pixels, new Rectangle(20, 20, 60, 60));
FillEllipse(pixels, new Rectangle(40, 40, 20, 20));
Any help would be greatly appreciated.
Something like this should do the trick
public class EllipseDrawer
{
private static PointF GetEllipsePointFromX(float x, float a, float b)
{
//(x/a)^2 + (y/b)^2 = 1
//(y/b)^2 = 1 - (x/a)^2
//y/b = -sqrt(1 - (x/a)^2) --Neg root for upper portion of the plane
//y = b*-sqrt(1 - (x/a)^2)
return new PointF(x, b * -(float)Math.Sqrt(1 - (x * x / a / a)));
}
public static void Ellipse(bool[,] pixels, Rectangle area)
{
DrawEllipse(pixels, area, false);
}
public static void FillEllipse(bool[,] pixels, Rectangle area)
{
DrawEllipse(pixels, area, true);
}
private static void DrawEllipse(bool[,] pixels, Rectangle area, bool fill)
{
// Get the size of the matrix
var matrixWidth = pixels.GetLength(0);
var matrixHeight = pixels.GetLength(1);
var offsetY = area.Top;
var offsetX = area.Left;
// Figure out how big the ellipse is
var ellipseWidth = (float)area.Width;
var ellipseHeight = (float)area.Height;
// Figure out the radiuses of the ellipses
var radiusX = ellipseWidth / 2;
var radiusY = ellipseHeight / 2;
//Keep track of the previous y position
var prevY = 0;
var firstRun = true;
// Loop through the points in the matrix
for (var x = 0; x <= radiusX; ++x)
{
var xPos = x + offsetX;
var rxPos = (int)ellipseWidth - x - 1 + offsetX;
if (xPos < 0 || rxPos < xPos || xPos >= matrixWidth)
{
continue;
}
var pointOnEllipseBoundCorrespondingToXMatrixPosition = GetEllipsePointFromX(x - radiusX, radiusX, radiusY);
var y = (int) Math.Floor(pointOnEllipseBoundCorrespondingToXMatrixPosition.Y + (int)radiusY);
var yPos = y + offsetY;
var ryPos = (int)ellipseHeight - y - 1 + offsetY;
if (yPos >= 0)
{
if (xPos > -1 && xPos < matrixWidth && yPos > -1 && yPos < matrixHeight)
{
pixels[xPos, yPos] = true;
}
if(xPos > -1 && xPos < matrixWidth && ryPos > -1 && ryPos < matrixHeight)
{
pixels[xPos, ryPos] = true;
}
if (rxPos > -1 && rxPos < matrixWidth)
{
if (yPos > -1 && yPos < matrixHeight)
{
pixels[rxPos, yPos] = true;
}
if (ryPos > -1 && ryPos < matrixHeight)
{
pixels[rxPos, ryPos] = true;
}
}
}
//While there's a >1 jump in y, fill in the gap (assumes that this is not the first time we've tracked y, x != 0)
for (var j = prevY - 1; !firstRun && j > y - 1 && y > 0; --j)
{
var jPos = j + offsetY;
var rjPos = (int)ellipseHeight - j - 1 + offsetY;
if(jPos == rjPos - 1)
{
continue;
}
if(jPos > -1 && jPos < matrixHeight)
{
pixels[xPos, jPos] = true;
}
if(rjPos > -1 && rjPos < matrixHeight)
{
pixels[xPos, rjPos] = true;
}
if (rxPos > -1 && rxPos < matrixWidth)
{
if(jPos > -1 && jPos < matrixHeight)
{
pixels[rxPos, jPos] = true;
}
if(rjPos > -1 && rjPos < matrixHeight)
{
pixels[rxPos, rjPos] = true;
}
}
}
firstRun = false;
prevY = y;
var countTarget = radiusY - y;
for (var count = 0; fill && count < countTarget; ++count)
{
++yPos;
--ryPos;
// Set all four points in the matrix we just learned about
// also, make the indication that for the rest of this row, we need to fill the body of the ellipse
if(yPos > -1 && yPos < matrixHeight)
{
pixels[xPos, yPos] = true;
}
if(ryPos > -1 && ryPos < matrixHeight)
{
pixels[xPos, ryPos] = true;
}
if (rxPos > -1 && rxPos < matrixWidth)
{
if(yPos > -1 && yPos < matrixHeight)
{
pixels[rxPos, yPos] = true;
}
if(ryPos > -1 && ryPos < matrixHeight)
{
pixels[rxPos, ryPos] = true;
}
}
}
}
}
}
Although there already seems to be a perfectly valid answer with source code and all to this question, I just want to point out that the WriteableBitmapEx project also contains a lot of efficient source code for drawing and filling different polygon types (such as ellipses) in so-called WriteableBitmap objects.
This code can easily be adapted to the general scenario where a 2D-array (or 1D representation of a 2D-array) should be rendered in different ways.
For the ellipse case, pay special attention to the DrawEllipse... methods in the WriteableBitmapShapeExtensions.cs file and FillEllipse... methods in the WriteableBitmapFillExtensions.cs file, everything located in the trunk/Source/WriteableBitmapEx sub-folder.
This applies to all languages in general, and I'm not sure why you're looking for something like this rather than using a pre-existing graphics library (homework?), but for drawing an ellipse I would suggest the midpoint algorithm, which can be adapted from circles to ellipses:
http://en.wikipedia.org/wiki/Midpoint_circle_algorithm
I'm not sure I fully agree that it's a generalisation of Bresenham's algorithm (certainly we were taught that Bresenham's and the Midpoint algorithm are different but proved to produce identical results), but that page should give you a start on it. See the link to the paper near the bottom for an algorithm specific to ellipses.
As for filling the ellipse, I'd say your best bet is a scanline approach: look at each row in turn, work out which pixels the left and right edges fall on, and then fill every pixel in between.
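A minimal sketch of that scanline fill, assuming the FillEllipse(bool[,] pixels, Rectangle area) signature from the question: for each row, solve (x/rx)^2 + (y/ry)^2 <= 1 for the left and right x extents and set everything in between.
public static void FillEllipseScanline(bool[,] pixels, Rectangle area)
{
    double rx = area.Width / 2.0;
    double ry = area.Height / 2.0;
    double cx = area.Left + rx;
    double cy = area.Top + ry;

    for (int y = area.Top; y < area.Bottom; y++)
    {
        if (y < 0 || y >= pixels.GetLength(1)) continue;
        double dy = (y + 0.5 - cy) / ry;            // sample at the pixel centre
        double t = 1.0 - dy * dy;
        if (t < 0) continue;                        // this row misses the ellipse
        double dx = rx * Math.Sqrt(t);
        int xStart = (int)Math.Ceiling(cx - dx - 0.5);
        int xEnd = (int)Math.Floor(cx + dx - 0.5);
        for (int x = xStart; x <= xEnd; x++)
            if (x >= 0 && x < pixels.GetLength(0))
                pixels[x, y] = true;
    }
}
For a hollow outline you can set only xStart and xEnd in each row, though near the top and bottom of the ellipse the midpoint algorithm above gives cleaner, gap-free curves.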
The simplest thing to do is iterate over each element of your matrix and check whether the ellipse equation evaluates to true,
taken from http://en.wikipedia.org/wiki/Ellipse
What I would start with is something resembling
bool[,] pixels = new bool[100, 100];
double a = 30;
double b = 20;
for (int i = 0; i < 100; i++)
    for (int j = 0; j < 100; j++)
    {
        double x = i - 50;
        double y = j - 50;
        pixels[i, j] = (x / a) * (x / a) + (y / b) * (y / b) <= 1;
    }
and if you want the inverse (everything outside the ellipse filled), just change the <= to >.
For a hollow one, you can check whether the difference between (x / a) * (x / a) + (y / b) * (y / b) and 1 is within a certain threshold. If you just change the inequality to an equation, it will probably miss some pixels.
Now, I haven't actually tested this fully, so I don't know if the equation is being applied correctly, but I just want to illustrate the concept.
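Building on that threshold idea, here is a rough sketch for a hollow ring using the same a, b, and pixels as above (the 0.1 tolerance is an arbitrary choice, and the ring thickness will vary a little around the ellipse):
for (int i = 0; i < 100; i++)
    for (int j = 0; j < 100; j++)
    {
        double x = i - 50;
        double y = j - 50;
        double v = (x / a) * (x / a) + (y / b) * (y / b);
        pixels[i, j] = Math.Abs(v - 1) < 0.1;   // close enough to the boundary
    }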
