Image processing task: Erosion in C#

I am doing an image processing assignment where I want to implement erosion and dilation algorithms. For each pixel they need to look in all directions (in this case up, down, left and right), so I'm using a plus-shaped structuring element. Here is my problem: I've got 4 nested for loops, which makes this operation very slow.
Can anyone tell me how to make the erosion process quicker without using unsafe code?
Here is what I have:
colorlistErosion = new List<Color>();
int colorValueR, colorValueG, colorValueB;
int tel = 0;
for (int y = 0; y < bitmap.Height; y++)
{
    for (int x = 0; x < bitmap.Width; x++)
    {
        Color col = bitmap.GetPixel(x, y);
        colorValueR = col.R; colorValueG = col.G; colorValueB = col.B;

        //Erosion: take the minimum over the 3x3 neighbourhood
        for (int a = -1; a < 2; a++)
        {
            for (int b = -1; b < 2; b++)
            {
                try
                {
                    Color col2 = bitmap.GetPixel(x + a, y + b);
                    colorValueR = Math.Min(colorValueR, col2.R);
                    colorValueG = Math.Min(colorValueG, col2.G);
                    colorValueB = Math.Min(colorValueB, col2.B);
                }
                catch
                {
                    //out-of-bounds neighbours are simply skipped
                }
            }
        }
        colorlistErosion.Add(Color.FromArgb(colorValueR, colorValueG, colorValueB));
    }
}

for (int een = 0; een < bitmap.Height; een++)
    for (int twee = 0; twee < bitmap.Width; twee++)
    {
        bitmap.SetPixel(twee, een, colorlistErosion[tel]);
        tel++;
    }

How to make the erosion process quicker without using unsafe code?
You can turn the inner loops into Parallel.For().
But I'm not 100% sure whether GetPixel() and especially SetPixel() are thread-safe; if they aren't, that is a deal-breaker.

Your algorithm is inherently slow due to the 4 nested loops. You're also processing the bitmap using the slowest approach possible, bitmap.GetPixel.
Take a look at SharpGL. If they don't have your filters you can download the source code and figure out how to make your own.
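Without unsafe code, a common alternative to GetPixel/SetPixel is to copy the pixel data into a plain byte array with LockBits and Marshal.Copy, run the min filter over the array, and copy it back. The following is only a rough sketch of that idea (not the original poster's code); it assumes the bitmap can be locked as 32bpp ARGB and uses the plus-shaped structuring element described in the question:
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;

//Sketch only: erosion over a managed byte array, no unsafe pointers.
static void ErodePlus(Bitmap bitmap)
{
    var rect = new Rectangle(0, 0, bitmap.Width, bitmap.Height);
    BitmapData data = bitmap.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);

    int stride = data.Stride;
    byte[] src = new byte[stride * bitmap.Height];
    Marshal.Copy(data.Scan0, src, 0, src.Length);
    byte[] dst = (byte[])src.Clone();

    //Plus-shaped structuring element: the pixel itself and its 4 neighbours.
    int[] dx = { 0, -1, 1, 0, 0 };
    int[] dy = { 0, 0, 0, -1, 1 };

    for (int y = 0; y < bitmap.Height; y++)
    {
        for (int x = 0; x < bitmap.Width; x++)
        {
            byte minB = 255, minG = 255, minR = 255;
            for (int k = 0; k < dx.Length; k++)
            {
                int nx = x + dx[k], ny = y + dy[k];
                if (nx < 0 || ny < 0 || nx >= bitmap.Width || ny >= bitmap.Height)
                    continue;                      //explicit bounds check instead of try/catch
                int idx = ny * stride + nx * 4;    //32bpp ARGB is laid out B, G, R, A
                if (src[idx]     < minB) minB = src[idx];
                if (src[idx + 1] < minG) minG = src[idx + 1];
                if (src[idx + 2] < minR) minR = src[idx + 2];
            }
            int o = y * stride + x * 4;
            dst[o] = minB; dst[o + 1] = minG; dst[o + 2] = minR; //alpha left as-is
        }
    }

    Marshal.Copy(dst, 0, data.Scan0, dst.Length);
    bitmap.UnlockBits(data);
}
Working on two separate arrays (src and dst) also avoids reading already-eroded pixels, and the explicit bounds check replaces the costly try/catch.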

Related

Cache friendly and faster way - `InvokeMe()`

I am trying to figure out if this really is the fastest approach. I want this to be as fast as possible, cache friendly, and with good time complexity.
DEMO: https://dotnetfiddle.net/BUGz8s
private static void InvokeMe()
{
    int hz = horizontal.GetLength(0) * horizontal.GetLength(1);
    int vr = vertical.GetLength(0) * vertical.GetLength(1);
    int hzcol = horizontal.GetLength(1);
    int vrcol = vertical.GetLength(1);

    //Determine true position from horizontal information:
    for (int i = 0; i < hz; i++)
    {
        if (horizontal[i / hzcol, i % hzcol] == true)
            System.Console.WriteLine("True, on position: {0},{1}", i / hzcol, i % hzcol);
    }

    //Determine true position from vertical information:
    for (int i = 0; i < vr; i++)
    {
        if (vertical[i / vrcol, i % vrcol] == true)
            System.Console.WriteLine("True, on position: {0},{1}", i / vrcol, i % vrcol);
    }
}
Pages I read:
Is there a "faster" way to iterate through a two-dimensional array than using nested for loops?
Fastest way to loop through a 2d array?
Time Complexity of a nested for loop that parses a matrix
Determining the big-O runtimes of these different loops?
EDIT: The code example is now closer to what I am actually dealing with. It is about determining a true point (x, y) in an N*N grid. The information available is two 2D arrays: horizontal and vertical.
To avoid confusion: imagine that, over time, some positions in vertical or horizontal get set to true. This currently works perfectly well. All I am asking about is the current approach of using one for loop per 2D array, as above, instead of two nested for loops per 2D array.
The time complexity of the single-loop and nested-loop approaches is the same, O(row * col) (which is O(n^2) for row == col, as in your example), so any difference in execution time comes from the constant cost of the operations, since the traversal order is the same in both cases. You can use BenchmarkDotNet to measure that. Consider the following benchmark:
[SimpleJob]
public class Loops
{
    int[,] matrix = new int[10, 10];

    [Benchmark]
    public void NestedLoops()
    {
        int row = matrix.GetLength(0);
        int col = matrix.GetLength(1);
        for (int i = 0; i < row; i++)
            for (int j = 0; j < col; j++)
            {
                matrix[i, j] = i * row + j + 1;
            }
    }

    [Benchmark]
    public void SingleLoop()
    {
        int row = matrix.GetLength(0);
        int col = matrix.GetLength(1);
        var l = row * col;
        for (int i = 0; i < l; i++)
        {
            matrix[i / col, i % col] = i + 1;
        }
    }
}
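For completeness (this runner is an assumption on my part, not part of the original answer), such a benchmark class is typically executed from a console project's entry point:
//Assumed entry point for running the benchmark class above.
using BenchmarkDotNet.Running;

class Program
{
    static void Main() => BenchmarkRunner.Run<Loops>();
}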
On my machine this gives:

| Method      | Mean     | Error    | StdDev   | Median   |
|-------------|---------:|---------:|---------:|---------:|
| NestedLoops | 144.5 ns | 2.94 ns  | 4.58 ns  | 144.7 ns |
| SingleLoop  | 578.2 ns | 11.37 ns | 25.42 ns | 568.6 ns |

So the single loop is actually slower.
If you change the loop body to some "dummy" operation, for example incrementing an outer variable or updating a fixed element of the matrix (for example the first one), you will see that the performance of both loops is roughly the same.
Did you consider
for (int i = 0; i < row; i++)
{
    for (int j = 0; j < col; j++)
    {
        Console.Write(string.Format("{0:00} ", matrix[i, j]));
        Console.Write(Environment.NewLine + Environment.NewLine);
    }
}
It is basically the same loop as yours, but without the / and % operations, which the compiler may or may not be able to optimize away.
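If cache friendliness is the real goal, another option (an assumption on my part, not from either answer) is to store the grid in a flat one-dimensional array and scan it with a single index, paying for the / and % only on the rare hits:
//Sketch only: assumes the grid data is kept in a flat bool[] of length rows * cols.
int rows = 100, cols = 100;    //assumed sizes for illustration
bool[] grid = new bool[rows * cols];

for (int i = 0; i < grid.Length; i++)
{
    if (grid[i])
    {
        //Division and modulo only happen when a true cell is found.
        System.Console.WriteLine("True, on position: {0},{1}", i / cols, i % cols);
    }
}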

For loop that evaluates all values at once?

So I have a cellular automaton where I can place pixels on an image and they move down one pixel each "tick". Now the problem is that the for loop looks like this:
for (int x = 0; x < 100; x++) {
    for (int y = 0; y < 100; y++) {
        //Check if nothing below (x,y) pixel and move it down if so
    }
}
Then the pixels get teleported to the bottom, because they get moved down on every iteration of the y loop. I solved it by making the y loop go from 100 down to 0 instead of 0 to 100, so it iterates upwards, but that won't work if I want to make my pixels move upwards in certain situations.
Maybe I could use a double loop, where the first one builds a list of which pixels to move and where, and the second one actually does it, but that seems quite performance heavy and I'm sure there is a better solution.
PS: If you have a better title for the question, let me know.
You need two copies of the cells. In pseudocode:
int[,] currentCells = new int[...];
int[,] nextCells = new int[...];

Initialize(currentCells);
while (true) {
    Draw(currentCells);
    Calculate next state by using currentCells as source and store result into nextCells;

    // Exchange the buffers (this copies only references and is fast).
    var temp = currentCells;
    currentCells = nextCells;
    nextCells = temp;
}
Note that we loop through each cell of the destination (nextCells) to give it a new value. Throughout this process we never look at the cells in nextCells, because those could already contain moved pixels. Our source is strictly currentCells, which represents the previous (frozen) state.
// Calculate next state.
for (int x = 0; x < 100; x++) {
    for (int y = 0; y < 100; y++) {
        if (currentCells[x, y] == 0 && y > 0) { // Nothing here
            // Take value from above
            nextCells[x, y] = currentCells[x, y - 1];
        } else {
            // Just copy
            nextCells[x, y] = currentCells[x, y];
        }
    }
}
In Conway's Game of Life, for instance, you calculate the state of a cell by analyzing the values of the surrounding cells. This means that neither working upwards nor downwards will work. By having 2 buffers, you always have a source buffer that is not changed during the calculation of the next state.
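As a compile-able illustration of the two-buffer idea (the names, sizes and the exact falling rule below are my assumptions, not the answer's code): a pixel moves into the empty cell below it, and because nextCells starts out cleared, the cell it leaves behind stays empty, so nothing gets duplicated.
//Sketch only: one tick of a "pixels fall down" rule using two buffers.
//Assumes non-zero means "pixel present"; Array.Clear needs using System;.
static void Step(int[,] currentCells, int[,] nextCells)
{
    int width = currentCells.GetLength(0);
    int height = currentCells.GetLength(1);

    Array.Clear(nextCells, 0, nextCells.Length);          //start from an empty frame

    for (int x = 0; x < width; x++)
    {
        for (int y = 0; y < height; y++)
        {
            if (currentCells[x, y] == 0)
                continue;                                 //nothing to move here
            if (y + 1 < height && currentCells[x, y + 1] == 0)
                nextCells[x, y + 1] = currentCells[x, y]; //cell below is free: fall
            else
                nextCells[x, y] = currentCells[x, y];     //blocked or at the bottom: stay
        }
    }
}
The caller then swaps currentCells and nextCells exactly as in the while (true) loop above.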
Would something like this work, assuming you put the logic you want inside the inner for loops?
static void MovePixels(bool moveUp)
{
    for (int x = 0; x < 100; x++)
    {
        if (moveUp)
        {
            for (int y = 0; y < 100; y++)
            {
            }
        }
        else
        {
            for (int y = 99; y >= 0; y--)
            {
            }
        }
    }
}

Parallelize transitive reduction

I have a Dictionary<int, List<int>>, where the Key represents an element of a set (or a vertex in an oriented graph) and the List is a set of other elements which are in relation with the Key (so there are oriented edges from Key to Values). The dictionary is optimized for creating a Hasse diagram, so the Values are always smaller than the Key.
I also have a simple sequential algorithm that removes all transitive edges (e.g. if I have the relations 1->2, 2->3 and 1->3, I can remove the edge 1->3, because there is a path from 1 to 3 via 2).
for (int i = 1; i < dictionary.Count; i++)
{
    for (int j = 0; j < i; j++)
    {
        if (dictionary[i].Contains(j))
            dictionary[i].RemoveAll(r => dictionary[j].Contains(r));
    }
}
Would it be possible to parallelize the algorithm? I could use Parallel.For for the inner loop. However, that is not recommended (https://msdn.microsoft.com/en-us/library/dd997392(v=vs.110).aspx#Anchor_2) and the resulting speed-up would not be significant (plus there might be problems with locking). Could I parallelize the outer loop?
There is a simple way to solve the parallelization problem: separate the data. Read from the original data structure and write to a new one. That way you can run it in parallel without even needing to lock.
But parallelization is probably not even necessary; the data structures are inefficient. You use a dictionary where an array would be sufficient (as I understand the code, you have vertices 0..dictionary.Count-1), and a List<int> for lookups. List.Contains is very inefficient; a HashSet would be better, or, for denser graphs, a BitArray. So instead of Dictionary<int, List<int>> you can use BitArray[].
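For illustration only (this conversion is my assumption, not part of the answer), the Dictionary<int, List<int>> from the question could be turned into a BitArray[] like this, giving O(1) edge lookups:
//Sketch: vertex i gets a BitArray of length i; bit j set means there is an edge i -> j (j < i).
//Assumes keys 0..dictionary.Count-1 exist and requires using System.Collections;.
BitArray[] graph = new BitArray[dictionary.Count];
for (int i = 0; i < dictionary.Count; i++)
{
    graph[i] = new BitArray(i);
    foreach (int j in dictionary[i])
        graph[i][j] = true;
}
//Membership test is now graph[i][j] instead of dictionary[i].Contains(j).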
I rewrote the algorithm and made some optimizations. It does not make a plain copy of the graph and delete edges; it just constructs the new graph from only the right edges. It uses BitArray[] for the input graph and List<int>[] for the final graph, as the latter is far more sparse.
int sizeOfGraph = 1000;

//create vertices of a graph
BitArray[] inputGraph = new BitArray[sizeOfGraph];
for (int i = 0; i < inputGraph.Length; ++i)
{
    inputGraph[i] = new BitArray(i);
}

//fill random edges
Random rand = new Random(10);
for (int i = 1; i < inputGraph.Length; ++i)
{
    BitArray vertex_i = inputGraph[i];
    for (int j = 0; j < vertex_i.Count; ++j)
    {
        if (rand.Next(0, 100) < 50) //50% fill ratio
        {
            vertex_i[j] = true;
        }
    }
}

//create transitive closure
for (int i = 0; i < sizeOfGraph; ++i)
{
    BitArray vertex_i = inputGraph[i];
    for (int j = 0; j < i; ++j)
    {
        if (vertex_i[j]) { continue; }
        for (int r = j + 1; r < i; ++r)
        {
            if (vertex_i[r] && inputGraph[r][j])
            {
                vertex_i[j] = true;
                break;
            }
        }
    }
}

//create transitive reduction
List<int>[] reducedGraph = new List<int>[sizeOfGraph];
Parallel.ForEach(inputGraph, (vertex_i, state, ii) =>
{
    int i = (int)ii;
    List<int> reducedVertex = reducedGraph[i] = new List<int>();
    for (int j = i - 1; j >= 0; --j)
    {
        if (vertex_i[j])
        {
            bool ok = true;
            for (int x = 0; x < reducedVertex.Count; ++x)
            {
                if (inputGraph[reducedVertex[x]][j])
                {
                    ok = false;
                    break;
                }
            }
            if (ok)
            {
                reducedVertex.Add(j);
            }
        }
    }
});

MessageBox.Show("Finished, reduced graph has "
    + reducedGraph.Sum(s => s.Count()) + " edges.");
EDIT
I wrote this:
The code has some problems. With the direction i goes now, you can delete edges you would still need, and the result would be incorrect. This turned out to be a mistake. I was thinking this way: let's have a graph
1->0
2->1, 2->0
3->2, 3->1, 3->0
Vertex 2 gets reduced by vertex 1, so we have
1->0
2->1
3->2, 3->1, 3->0
Now vertex 3 gets reduced by vertex 2
1->0
2->1
3->2, 3->0
And we have a problem, as we cannot reduce 3->0, which stayed there because 2->0 was already reduced. But this was my mistake; it would never happen. The inner cycle goes strictly from lower to higher, so instead
Vertex 3 gets reduced by vertex 1
1->0
2->1
3->2, 3->1
and now by vertex 2
1->0
2->1
3->2
And the result is correct. I apologize for the error.

Gaussian Blur implementation not working properly

I'm attempting to implement a simple Gaussian blur function, however when it is run on the image, the result just comes back more opaque than the original; no blur takes place.
public double[,] CreateGaussianFilter(int size)
{
    double[,] gKernel = new double[size, size];
    for (int y = 0; y < size; y++)
        for (int x = 0; x < size; x++)
            gKernel[y, x] = 0.0;

    // set standard deviation to 1.0
    double sigma = 1.0;
    double r, s = 2.0 * sigma * sigma;

    // sum is for normalization
    double sum = 0.0;

    // generate kernel
    for (int x = -size / 2; x <= size / 2; x++)
    {
        for (int y = -size / 2; y <= size / 2; y++)
        {
            r = Math.Sqrt(x * x + y * y);
            gKernel[x + size / 2, y + size / 2] = (Math.Exp(-(r * r) / s)) / (Math.PI * s);
            sum += gKernel[x + size / 2, y + size / 2];
        }
    }

    // normalize the Kernel
    for (int i = 0; i < size; ++i)
        for (int j = 0; j < size; ++j)
            gKernel[i, j] /= sum;

    return gKernel;
}
public void GaussianFilter(ref LockBitmap image, double[,] filter)
{
    int size = filter.GetLength(0);
    for (int y = size / 2; y < image.Height - size / 2; y++)
    {
        for (int x = size / 2; x < image.Width - size / 2; x++)
        {
            //Grab surrounding pixels and stick them in an accumulator
            double sum = 0.0;
            int filter_y = 0;
            for (int r = y - (size / 2); r < y + (size / 2); r++)
            {
                int filter_x = 0;
                for (int c = x - (size / 2); c < x + (size / 2); c++)
                {
                    //Multiply surrounding pixels by the filter, add them up and set the center pixel (x,y) to this value
                    Color pixelVal = image.GetPixel(c, r);
                    double grayVal = (pixelVal.B + pixelVal.R + pixelVal.G) / 3.0;
                    sum += grayVal * filter[filter_y, filter_x];
                    filter_x++;
                }
                filter_y++;
            }
            //set the xy pixel
            image.SetPixel(x, y, Color.FromArgb(255, (int)sum, (int)sum, (int)sum));
        }
    }
}
Any suggestions are much appreciated. Thanks!
There are a number of things to note about your solution.

1. A convolved image getting darker generally means the kernel has a gain of less than 1. Though perhaps not in this case, see (5).
2. A Gaussian blur is a separable kernel and can be performed in far less time than brute force (see the sketch after this list).
3. Averaging RGB to gray is not an optically "correct" way of computing luminance.
4. GetPixel/SetPixel approaches are generally very slow. If you are in a language that supports pointers you should use them. Looks like C#? Use unsafe code to get access to pointers.
5. A cast to int truncates; this could be your source of decreased brightness. You are in essence always rounding down.
6. The nested loops in your kernel-generating function contain excessive bounds adjustments. This could be made much faster, but better yet, replaced with a separable approach.
7. You are convolving in a single buffer, so you are convolving already-convolved values.
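To illustrate points 2, 5 and 7, here is a rough sketch (my code, not a drop-in fix for yours) of a separable Gaussian blur on a grayscale buffer: a 1D kernel applied horizontally into a temporary buffer, then vertically into the output, so convolved values are never fed back into the convolution.
using System;

//Sketch only: separable Gaussian blur on a double[,] grayscale image.
//Assumes an odd kernel size (e.g. 5) so the kernel is centered.
static double[] Gaussian1D(int size, double sigma)
{
    var kernel = new double[size];
    double s = 2.0 * sigma * sigma, sum = 0.0;
    for (int i = 0; i < size; i++)
    {
        int d = i - size / 2;
        kernel[i] = Math.Exp(-(d * d) / s);
        sum += kernel[i];
    }
    for (int i = 0; i < size; i++) kernel[i] /= sum;   //normalize so brightness is preserved
    return kernel;
}

static double[,] Blur(double[,] src, double[] kernel)
{
    int h = src.GetLength(0), w = src.GetLength(1), half = kernel.Length / 2;
    var tmp = new double[h, w];
    var dst = new double[h, w];

    //Horizontal pass: read from src, write to tmp.
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            double acc = 0.0;
            for (int k = -half; k <= half; k++)
            {
                int xx = Math.Min(Math.Max(x + k, 0), w - 1);   //clamp at the borders
                acc += src[y, xx] * kernel[k + half];
            }
            tmp[y, x] = acc;
        }

    //Vertical pass: read from tmp, write to dst.
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            double acc = 0.0;
            for (int k = -half; k <= half; k++)
            {
                int yy = Math.Min(Math.Max(y + k, 0), h - 1);
                acc += tmp[yy, x] * kernel[k + half];
            }
            dst[y, x] = acc;
        }

    return dst;
}
When writing the result back into 8-bit pixels, use (int)Math.Round(value) rather than a plain cast, which truncates and darkens the image (point 5).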
Thanks

Image Comparing and return Percentage

int DiferentPixels = 0;
Bitmap first = new Bitmap("First.jpg");
Bitmap second = new Bitmap("Second.jpg");
Bitmap container = new Bitmap(first.Width, first.Height);

for (int i = 0; i < first.Width; i++)
{
    for (int j = 0; j < first.Height; j++)
    {
        int r1 = second.GetPixel(i, j).R;
        int g1 = second.GetPixel(i, j).G;
        int b1 = second.GetPixel(i, j).B;
        int r2 = first.GetPixel(i, j).R;
        int g2 = first.GetPixel(i, j).G;
        int b2 = first.GetPixel(i, j).B;
        if (r1 != r2 && g1 != g2 && b1 != b2)
        {
            DiferentPixels++;
            container.SetPixel(i, j, Color.Red);
        }
        else
            container.SetPixel(i, j, first.GetPixel(i, j));
    }
}

int TotalPixels = first.Width * first.Height;
float dierence = (float)((float)DiferentPixels / (float)TotalPixels);
float percentage = dierence * 100;
With this portion of code I'm comparing two images pixel by pixel, and yes, it works: it returns the percentage of difference by comparing each pixel of the first image with the pixel at the same index of the second image. But something feels wrong here; the comparison seems overly precise, so maybe it should not work like that, and maybe there are better algorithms which are faster and more flexible.
So, does anyone have an idea of how I could transform the comparison? Should I continue with this approach, should I compare the colors of each pixel differently, or ...?
PS: If anyone has a suggestion for how to make this code parallel, I would also accept it! For example, if this were expanded to 4 threads, would it run faster on a quad core?
One obvious change would be to call GetPixel only once per bitmap and then work with the returned Color structs directly:
for (int i = 0; i < first.Width; ++i)
{
    for (int j = 0; j < first.Height; ++j)
    {
        Color secondColor = second.GetPixel(i, j);
        Color firstColor = first.GetPixel(i, j);
        if (firstColor != secondColor)
        {
            DiferentPixels++;
            container.SetPixel(i, j, Color.Red);
        }
        else
        {
            container.SetPixel(i, j, firstColor);
        }
    }
}
For speed, resize the images to something very small (16x12, for example) and do the pixel comparison. If it is a near match, then try it at higher resolution.
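To address the PS about parallelizing: GetPixel/SetPixel are tied to the Bitmap object and are not safe to call from multiple threads, so a common pattern (a sketch based on my assumptions, not one of the answers above) is to copy both images into byte arrays first and then let Parallel.For count differing pixels per row:
using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
using System.Threading;
using System.Threading.Tasks;

//Sketch only: assumes both images have the same dimensions and can be locked as 32bpp ARGB.
static double DifferencePercentage(Bitmap first, Bitmap second)
{
    var rect = new Rectangle(0, 0, first.Width, first.Height);
    BitmapData d1 = first.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    BitmapData d2 = second.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);

    byte[] a = new byte[d1.Stride * first.Height];
    byte[] b = new byte[d2.Stride * second.Height];
    Marshal.Copy(d1.Scan0, a, 0, a.Length);
    Marshal.Copy(d2.Scan0, b, 0, b.Length);
    first.UnlockBits(d1);
    second.UnlockBits(d2);

    int differentPixels = 0;
    Parallel.For(0, first.Height, y =>
    {
        int rowDiff = 0;
        for (int x = 0; x < first.Width; x++)
        {
            int i = y * d1.Stride + x * 4;
            //Compare the B, G and R channels (32bpp ARGB is laid out B, G, R, A).
            if (a[i] != b[i] || a[i + 1] != b[i + 1] || a[i + 2] != b[i + 2])
                rowDiff++;
        }
        Interlocked.Add(ref differentPixels, rowDiff);
    });

    return 100.0 * differentPixels / (first.Width * (double)first.Height);
}
Plain arrays are safe to read from multiple threads; building the red "difference" bitmap would still have to happen single-threaded or on a separate copied buffer.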
