Fixing color saturation using histogram normalization - C#

I am developing a filter to work on images. After applying this filter, I get some RGB values that are greater than 255 or less than 0. I can saturate these pixels to 255 and 0 respectively, but then the image does not look good.
I want to find a way to normalize the pixel histograms so that after normalization the RGB values lie between 0 and 255.
The code is as follows:
double max = outputPixel.Cast<double>().Max();
double targetMax = 300;
double min = outputPixel.Cast<double>().Min();
double targetMin = -50;
for (int c = 0; c < 3; c++)
{
    for (int i = 0; i < outputPixel.GetLength(0); i++)
    {
        for (int j = 0; j < outputPixel.GetLength(1); j++)
        {
            // Rescale from [min, max] to [targetMin, targetMax], apply the
            // per-channel weight wb[c], then clamp to [0, 255].
            outputPixel[i, j, c] = wb[c] * (((outputPixel[i, j, c] - min) * (targetMax - targetMin) / (max - min)) + targetMin);
            outputPixel[i, j, c] = Saturate(outputPixel[i, j, c]);
        }
    }
}
outputPixel is defined as
private static double[, ,] outputPixel;
and initialized as follows:
outputPixel = new double[inputImage.Width, inputImage.Height, 3];
The problem is:
If I don't do histogram normalization, there is a lot of speckle-type noise in dark areas and most of the image is bleached out.
If I use histogram normalization, I can see a red tint on white areas (mainly the cloud area).
One solution is to normalize the histogram in HSB, but I cannot use the built-in C# HSB conversion because the RGB values are greater than 255 (or less than 0).
What can I do in C#?
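One hue-preserving variant of the HSB idea (a minimal sketch, not code from the original post): scale all three channels of a pixel by a common factor so that the largest channel fits into [0, 255]. Because the channel ratios are preserved, so is the hue, which avoids the tint that per-channel normalization can introduce.
// Sketch: brightness in the HSB sense is max(R, G, B), so scaling the
// whole triple down by 255 / max keeps the hue while fitting the range.
static void NormalizeBrightness(double[, ,] pixels)
{
    for (int i = 0; i < pixels.GetLength(0); i++)
    {
        for (int j = 0; j < pixels.GetLength(1); j++)
        {
            double maxChannel = Math.Max(pixels[i, j, 0],
                                Math.Max(pixels[i, j, 1], pixels[i, j, 2]));
            if (maxChannel > 255.0)
            {
                double scale = 255.0 / maxChannel;
                for (int c = 0; c < 3; c++)
                    pixels[i, j, c] *= scale;
            }
            // Negative values still need to be clamped to 0.
            for (int c = 0; c < 3; c++)
                pixels[i, j, c] = Math.Max(0.0, pixels[i, j, c]);
        }
    }
}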

Related

Approximate image from byte array

I wrote an application where some dots float around, and if I assign a dot a Point, it will move to this position. Now I want to load an image, convert it to a monochrome image (only pure black or white pixels, no shades of gray) and make each dot float to a position where it represents a black pixel.
I've already managed to load and convert an image that way and extracted the pixels as a one-dimensional byte[]. I've managed to iterate through this array with the following code:
int stride = width * 4;
for (int y = 0; y < height; y++)
    for (int x = 0; x < width; x++)
    {
        int index = y * stride + 4 * x;
        // array[index]     <- Red
        // array[index + 1] <- Green
        // array[index + 2] <- Blue
        // array[index + 3] <- Alpha
    }
The byte array holds every pixel with 4 bytes (RGBA). So the array length is ImageHeight*ImageWidth*4 bytes. Either a pixel is Black (0, 0, 0, 255) or White (255, 255, 255, 255).
My problem now is that I'm not able to correctly approximate the black areas of the image with just n dots. In most cases I will have far fewer floating dots than there are black pixels in the array. So what I need is a method that gives me a Point[] containing n Points that represent the black areas of the image as well as possible. Can someone help me out?
Loop through the array and find the points whose red, green and blue components are all 0 to get the black dots:
List<Point> blackPoints = new List<Point>();
for (int i = 0; i < array.Length; i += 4)
    if (array[i] == 0 && array[i + 1] == 0 && array[i + 2] == 0) // alpha is not important
    {
        int tmp = i / 4;
        blackPoints.Add(new Point(tmp % width, tmp / width));
    }
Create methods to get the weight of a pixel based on its own and its neighbors' colors, and also a method to find the weight of a block:
public static class Exts
{
    // Weight of the pixel at (x, y): how many black points lie in its
    // 3x3 neighbourhood (including the pixel itself).
    public static int Weight(this IEnumerable<Point> ps, int x, int y, int width, int height)
    {
        int weight = 0;
        for (int i = Math.Max(x - 1, 0); i <= Math.Min(width - 1, x + 1); i++)
            for (int j = Math.Max(y - 1, 0); j <= Math.Min(height - 1, y + 1); j++)
                if (ps.Any(a => a.X == i && a.Y == j)) weight++;
        return weight;
    }

    // Number of black points inside the 3x3 block whose top-left corner is (x, y).
    public static int BlockWeight(this IEnumerable<Point> ps, int x, int y)
    {
        return ps.Count(a => a.X >= x && a.X <= x + 2 && a.Y >= y && a.Y <= y + 2);
    }
}
Now loop through the bitmap in blocks of nine pixels (3x3), and if a block's weight is more than half (in this case, greater than or equal to 5), select the point in this block that has the highest weight to represent the black point:
List<Point> result = new List<Point>();
for (int i = 0; i < width; i += 3)
    for (int j = 0; j < height; j += 3)
        if (blackPoints.BlockWeight(i, j) >= 5)
            result.Add(blackPoints
                .Where(a => a.X >= i && a.X <= i + 2 && a.Y >= j && a.Y <= j + 2)
                .OrderByDescending(a => blackPoints.Weight(a.X, a.Y, width, height))
                .First());

Creating a Gabor texture in C#

I am trying to create a texture (in a 3-D byte array) that is a coloured Gabor patch. I am using OpenTK to map the texture. The texture mapping is working fine, but the texture created by my code below is not what I need.
The code I have come up with is as follows:
for (int x = 0; x < size; x++)
{
    for (int y = 0; y < size; y++)
    {
        // Sinusoidal carrier, shifted into the range [0, 1].
        double sin_term = 0.5 * Math.Sin(10 * Math.PI * ((double)x / (double)size));
        sin_term += 0.5;
        // Gaussian envelope (note: centred on the corner (0, 0), not on the middle of the patch).
        double gauss = 0.5 + Math.Exp(-((Math.Pow(x, 2) + Math.Pow(y, 2)) / (2 * Math.Pow(sigma, 2))));
        double gabor = sin_term * gauss;
        byteTexture2[y, x, 0] = (byte)((double)Colour.R * gabor);
        byteTexture2[y, x, 1] = (byte)((double)Colour.G * gabor);
        byteTexture2[y, x, 2] = (byte)((double)Colour.B * gabor);
    }
}
My maths isn't all that good, so I may be off track, but I was trying to multiply the sine wave by the Gaussian. The sine wave term seems to work OK by itself, so the Gaussian may be where the problem is.
Any help would be much appreciated.
I have found MATLAB code for this problem, but no C/C++/C# code.
Thanks.
I recently coded up a Gabor filter kernel for use in OpenCV (using C++). Here is my code for the kernel:
/// compute Gabor filter kernels
for (int i = 0; i < h; i++) {
    x = i - 0.5*(h - 1);
    for (int j = 0; j < h; j++) {
        y = j - 0.5*(h - 1);
        // Note: -16.0 rather than -16, so the division is not integer division.
        gaborKernelCos.at<float>(i, j) = exp((-16.0 / (h*h))*(x*x + y*y)) * cos((2 * M_PI*w / h)*(x*cos(q) + y*sin(q))) / (h*h);
        gaborKernelSin.at<float>(i, j) = exp((-16.0 / (h*h))*(x*x + y*y)) * sin((2 * M_PI*w / h)*(x*cos(q) + y*sin(q))) / (h*h);
    }
}
Where the input parameters are the kernel size h, wave number w, and filter orientation q. Note the wave number is related to the filter pixel wavelength by l = h/w. Also, my value for sigma is simply a constant multiple of h.
This shouldn't really produce anything wildly different from your code as far as I can tell. Does your value for sigma make sense? It should probably be at most sigma = 0.5*size.
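Since the question asks for C#, here is a rough C# translation of the same kernel loop (a sketch under the same parameter conventions; the method name GaborKernels is my own, not from the original answer):
// Sketch: build cosine and sine Gabor kernels of size h x h, with wave
// number w and orientation q (radians), mirroring the C++ loop above.
static void GaborKernels(int h, double w, double q,
                         out double[,] cosKernel, out double[,] sinKernel)
{
    cosKernel = new double[h, h];
    sinKernel = new double[h, h];
    for (int i = 0; i < h; i++)
    {
        double x = i - 0.5 * (h - 1);
        for (int j = 0; j < h; j++)
        {
            double y = j - 0.5 * (h - 1);
            // Gaussian envelope with sigma tied to the kernel size.
            double envelope = Math.Exp((-16.0 / (h * h)) * (x * x + y * y)) / (h * h);
            // Carrier wave along the orientation q.
            double phase = (2 * Math.PI * w / h) * (x * Math.Cos(q) + y * Math.Sin(q));
            cosKernel[i, j] = envelope * Math.Cos(phase);
            sinKernel[i, j] = envelope * Math.Sin(phase);
        }
    }
}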

Color resemblance for motion detection

Given 2 consecutive frames, how can I search for pixels that changed?
I tried the following:
if (old != null)
{
    for (int i = 0; i < b.Width; i++)
    {
        for (int j = 0; j < b.Height; j++)
        {
            // Copy changed pixels into s; paint unchanged pixels white.
            if (!b.GetPixel(i, j).Equals(old.GetPixel(i, j)))
                s.SetPixel(i, j, b.GetPixel(i, j));
            else
                s.SetPixel(i, j, Color.White);
        }
    }
}
Where "old" is the previous frame and "s" is the new frame. The code basically paints the pixels that didn't change in white.
But since the webcam produces a very low quality frame almost all of the pixels change.
How can I eliminate the pixels that didn't change "greatly"?
A very basic approach is to convert your Color pixel to a grey value in the range 0-255.
You can then compare your pixels as integers and allow some delta of error.
Consider this method, which converts a Color to an integer grayscale value:
private static int GreyScaleRange(Color originalColor)
{
    // Standard luminance weights: green contributes most, blue least.
    return (int)((originalColor.R * .3) + (originalColor.G * .59)
               + (originalColor.B * .11));
}
So instead of the equality check, you should do:
int deltaDifference = 5;
if (Math.Abs(GreyScaleRange(b.GetPixel(i, j)) - GreyScaleRange(old.GetPixel(i, j))) > deltaDifference)
    s.SetPixel(i, j, b.GetPixel(i, j));
else
    s.SetPixel(i, j, Color.White);

Calculating brightness of an image in grey scale (32bppArgb)

Hi everyone. I was trying to calculate the brightness of a laser spot image. The image was originally green, as the laser is monochromatic green, but the photo was then converted to a grey-scale image using a grey-scale filter. I used two for-loops to get the pixel values of the picture, but I got something really unexpected, and I don't have much of a clue what's going on. I think I need someone to shed some light on this for me.
Situation: I have a black and white image, and the algorithm below doesn't give the brightest point (i.e. the white spot); instead it points at some random position (which is neither black nor white).
EDIT: I use a for loop to loop over the picture to find the pixel values.
for (int i = xstart; i < xend; i++)
{
    for (int j = ystart; j < yend; j++)
    {
        Color pixelColor = myBitmap.GetPixel(i, j);
        brightness = pixelColor.GetBrightness();
        //brightness = 0.2126 * pixelColor.R + 0.7152 * pixelColor.G + 0.0722 * pixelColor.B;
        //brightness = 0.333 * pixelColor.R + 0.333 * pixelColor.G + (1 - 0.333 * 2) * pixelColor.B;
        // Record the coordinates and brightness of every pixel.
        brightness_array[k, 0] = i;
        brightness_array[k, 1] = j;
        brightness_array[k, 2] = brightness;
        k++;
    }
}
All of these brightness formulas gave a wrong position for the brightest point. I wonder whether it's because the image has an extra alpha channel for transparency that affects the result.
double max_brightness = 0.0;
int positionX = 0;
int positionY = 0;
for (int m = 0; m < k; m++)
{
    if (brightness_array[m, 2] > max_brightness)
    {
        positionX = Convert.ToInt32(brightness_array[m, 0]);
        positionY = Convert.ToInt32(brightness_array[m, 1]);
        max_brightness = brightness_array[m, 2];
    }
}
The above code is how I find the maximum brightness: I scan the pixels one by one, and whenever a pixel is brighter than the current maximum I record it as the new max_brightness, so that after looping over the whole picture I have the maximum brightness.
The code to work out the brightest point on the image is working fine. The problem is that the picture is displayed in a picture box that scales the image down, so the MouseMove code that determines the position of the cursor within the PictureBox doesn't match the point on the raw image.
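For illustration, a minimal sketch of the coordinate correction (assuming the PictureBox uses SizeMode = StretchImage; the helper name is my own, not from the answer):
// Map a cursor position inside the PictureBox back to raw image
// coordinates by undoing the scale factor in each axis.
static Point ToImageCoordinates(PictureBox box, Point cursor)
{
    float scaleX = (float)box.Image.Width / box.ClientSize.Width;
    float scaleY = (float)box.Image.Height / box.ClientSize.Height;
    return new Point((int)(cursor.X * scaleX), (int)(cursor.Y * scaleY));
}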

Get average color from bmp

I am developing a taskbar for the 2nd screen (something like DisplayFusion).
However, I'm having difficulty getting the right average color from an icon. For example, Google Chrome: when I hover over it on the main taskbar, its background turns yellow; with my code it turns orange/red.
How can I get the right dominant/average color?
I use this code to calculate the average color:
public static Color getDominantColor(Bitmap bmp)
{
    //Used for tally
    int r = 0;
    int g = 0;
    int b = 0;
    int total = 0;
    for (int x = 0; x < bmp.Width; x++)
    {
        for (int y = 0; y < bmp.Height; y++)
        {
            Color clr = bmp.GetPixel(x, y);
            r += clr.R;
            g += clr.G;
            b += clr.B;
            total++;
        }
    }
    //Calculate average
    r /= total;
    g /= total;
    b /= total;
    return Color.FromArgb(r, g, b);
}
The average color is not necessarily the color most used. I recommend calculating the hue of the pixels whose saturation is over a certain threshold, and using an array to build a histogram of the image (how many times each hue value occurs).
Then smooth the histogram (replace each bin with the local average of itself and both neighbours), and take the hue where this smoothed histogram reaches its maximum.
You can get HSL values with:
Color.GetHue
Color.GetSaturation
Color.GetBrightness
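A minimal sketch of that idea (the one-degree bin count, the saturation threshold, and the FromHsv helper are my own choices, not from the answer):
// Sketch: histogram of hues over sufficiently saturated pixels, smoothed
// with both neighbours; the peak hue is returned as a saturated colour.
public static Color GetDominantHueColor(Bitmap bmp, float saturationThreshold)
{
    int[] histogram = new int[360];
    for (int x = 0; x < bmp.Width; x++)
        for (int y = 0; y < bmp.Height; y++)
        {
            Color clr = bmp.GetPixel(x, y);
            if (clr.GetSaturation() >= saturationThreshold)
                histogram[(int)clr.GetHue() % 360]++;
        }
    // Smooth each bin with its neighbours (the hue circle wraps around).
    int bestHue = 0;
    double bestCount = -1;
    for (int h = 0; h < 360; h++)
    {
        double smoothed = (histogram[(h + 359) % 360] + histogram[h]
                         + histogram[(h + 1) % 360]) / 3.0;
        if (smoothed > bestCount)
        {
            bestCount = smoothed;
            bestHue = h;
        }
    }
    return FromHsv(bestHue, 1.0, 1.0);
}

// Plain HSV-to-RGB conversion so the dominant hue can be shown as a Color.
static Color FromHsv(double hue, double saturation, double value)
{
    int hi = (int)(hue / 60) % 6;
    double f = hue / 60 - Math.Floor(hue / 60);
    int v = (int)(value * 255);
    int p = (int)(value * (1 - saturation) * 255);
    int q = (int)(value * (1 - f * saturation) * 255);
    int t = (int)(value * (1 - (1 - f) * saturation) * 255);
    switch (hi)
    {
        case 0: return Color.FromArgb(v, t, p);
        case 1: return Color.FromArgb(q, v, p);
        case 2: return Color.FromArgb(p, v, t);
        case 3: return Color.FromArgb(p, q, v);
        case 4: return Color.FromArgb(t, p, v);
        default: return Color.FromArgb(v, p, q);
    }
}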
