Issue with Kinect V2 Coordinate Mapper - C#

I'm currently undertaking a university project that involves object detection and recognition with a Kinect. I'm using the MapDepthFrameToColorSpace method to map the depth data to the RGB image. I believe the issue is with this loop:
for (int i = 0; i < _depthData.Length; ++i)
{
    ColorSpacePoint newpoint = cPoints[i];
    if (!float.IsNegativeInfinity(newpoint.X) && !float.IsNegativeInfinity(newpoint.Y))
    {
        int colorX = (int)Math.Floor(newpoint.X + 0.5);
        int colorY = (int)Math.Floor(newpoint.Y + 0.5);
        if ((colorX >= 0) && (colorX < colorFrameDescription.Width) && (colorY >= 0) && (colorY < colorFrameDescription.Height))
        {
            int j = ((colorFrameDescription.Width * colorY) + colorX) * bytesPerPixel;
            int newdepthpixel = i * 4;
            displaypixel[newdepthpixel] = colorpixels[j];         //B
            displaypixel[newdepthpixel + 1] = colorpixels[j + 1]; //G
            displaypixel[newdepthpixel + 2] = colorpixels[j + 2]; //R
            displaypixel[newdepthpixel + 3] = 255;                //A
        }
    }
}
It appears that the indexing is not correct, or that pixels/depth values are missing, because the output looks like multiple small copies of the same image with a limited x range.
http://postimg.org/image/tecnvp1nx/

Let me guess: Your output image (displaypixel) is 1920x1080 pixels big? (Though from the link you posted, it seems to be 1829×948?)
That's your problem. MapDepthFrameToColorSpace returns the corresponding position in the color image for each depth pixel. That means you get 512x424 values. Putting those into a 1920x1080 image means only about 10% of the image is filled, and the part that's filled will be jumbled.
If you make your output image 512x424 pixels instead, it should give you an image like the second one in this article.
Or you could keep your output image at 1920x1080, but instead of putting one pixel after the other, you'd also calculate the position where to put the pixel. So instead of doing
int newdepthpixel = i * 4;
you'd need to do
int newdepthpixel = ((colorFrameDescription.Width * colorY) + colorX) * 4;
That would give you a 1920x1080 image, but with only 512x424 pixels filled, with lots of space in between.
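For illustration, here is a sketch of that second approach, reusing the poster's variable names (untested; displaypixel would then need to be allocated as colorFrameDescription.Width * colorFrameDescription.Height * 4 bytes):
for (int i = 0; i < _depthData.Length; ++i)
{
    ColorSpacePoint p = cPoints[i];
    if (float.IsNegativeInfinity(p.X) || float.IsNegativeInfinity(p.Y))
        continue; // no valid color mapping for this depth pixel
    int colorX = (int)Math.Floor(p.X + 0.5);
    int colorY = (int)Math.Floor(p.Y + 0.5);
    if (colorX < 0 || colorX >= colorFrameDescription.Width ||
        colorY < 0 || colorY >= colorFrameDescription.Height)
        continue;
    int j = ((colorFrameDescription.Width * colorY) + colorX) * bytesPerPixel;
    // write to the color-space position, not to i * 4
    int dst = ((colorFrameDescription.Width * colorY) + colorX) * 4;
    displaypixel[dst]     = colorpixels[j];     //B
    displaypixel[dst + 1] = colorpixels[j + 1]; //G
    displaypixel[dst + 2] = colorpixels[j + 2]; //R
    displaypixel[dst + 3] = 255;                //A
}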

Related

Wrong output pixel colors (grayscale) using EvilDICOM

I'm using EvilDICOM to construct a 2D image in Unity (i.e. a Texture2D). The output pixel values are wrong compared to what I get from other DICOM viewers. I'm new to DICOM development and couldn't figure out what I did wrong. I've been stuck on this for weeks. Any help is appreciated.
I'm using this formula from:
https://www.dabsoft.ch/dicom/3/C.11.2.1.2/
I also read this answer from:
How to Display DICOM images using EvilDICOM in c#?
Known information about the DICOM file I'm using:
Bits Allocated: 16
Bits Stored: 16
High Bit: 15
Rows, Columns: 512
Pixel Representation: 0 (i.e. unsigned)
Window Center: 40
Window Width: 350
Rescale Intercept: -1024
Rescale Slope: 1
//Convert pixel data to 8-bit grayscale
for (int i = 0; i < pixelData.Count; i += 2)
{
    //original data - 16 bits unsigned
    ushort pixel = (ushort)(pixelData[i] * 0xFF + pixelData[i + 1]);
    double valgray = pixel;
    valgray = slope * valgray + intercept; //modality LUT
    if (valgray <= level - 0.5 - (window - 1) / 2)
    {
        valgray = 0;
    }
    else if (valgray > level - 0.5 + (window - 1) / 2)
    {
        valgray = 255;
    }
    else
    {
        //scale the ramp to the same 0..255 range as the clamp branches
        valgray = ((valgray - (level - 0.5)) / (window - 1) + 0.5) * 255;
    }
    //Assign valgray to RGBA
    colors[i / 2].r = (byte)valgray;
    colors[i / 2].g = (byte)valgray;
    colors[i / 2].b = (byte)valgray;
    colors[i / 2].a = 0xFF; //Alpha = max
}
The left is my output; the right is the output from another DICOM viewer:
https://drive.google.com/file/d/1IjL48_iZDXAVi4_gzG6fLN3A2td2rwfS/view?usp=sharing
I had the order of the bytes in pixelData inverted; the pixel data is little-endian, so the low byte comes first. The pixel value should be:
ushort pixel = (ushort)(pixelData[i + 1] * 256 + pixelData[i]);
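Equivalently, if the pixel data is available as a byte[] (an assumption; you may have to copy it out of whatever collection EvilDICOM hands you, here called pixelDataArray), BitConverter can do the little-endian read on a little-endian machine:
// Reads the 16-bit value at offset i, low byte first (machine endianness,
// which is little-endian on typical x86/ARM targets).
ushort pixel = BitConverter.ToUInt16(pixelDataArray, i);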

Calculating brightness of an image in grey scale (32bppArgb)

I was trying to calculate the brightness of a laser spot image. The image was originally green, as the laser is monochromatic green, but the photo was then converted to a grey-scale image using a grey-scale filter. I used two for-loops to get the pixel values of the picture, but I got something really unexpected, and I don't have much of a clue what's going on. I think I need someone to shed some light on this for me.
Situation: I have a black and white image. The algorithm below didn't give the brightest point, i.e. the white spot; instead it pointed at some random position (which is neither black nor white).
EDIT: I use for-loops to loop over the picture and collect the pixel values.
for (int i = xstart; i < xend; i++)
{
    for (int j = ystart; j < yend; j++)
    {
        Color pixelColor = myBitmap.GetPixel(i, j);
        brightness = pixelColor.GetBrightness();
        //brightness = 0.2126 * pixelColor.R + 0.7152 * pixelColor.G + 0.0722 * pixelColor.B;
        //brightness = 0.333 * pixelColor.R + 0.333 * pixelColor.G + (1 - 0.333 * 2) * pixelColor.B;
        brightness_array[k, 0] = i;
        brightness_array[k, 1] = j;
        brightness_array[k, 2] = brightness;
        k++;
    }
}
All of these brightness formulas gave a wrong position for the brightest point; I wonder if it's because the image has an extra alpha channel for transparency which affects the result.
double max_brightness = 0.0;
int positionX = 0;
int positionY = 0;
for (int m = 0; m < k; m++)
{
    if (brightness_array[m, 2] > max_brightness)
    {
        positionX = Convert.ToInt32(brightness_array[m, 0]);
        positionY = Convert.ToInt32(brightness_array[m, 1]);
        max_brightness = brightness_array[m, 2];
    }
}
The above code is how I find the maximum brightness: I scan the pixels one by one and record each new maximum as max_brightness, so that after looping over the whole picture I have the brightest point.
The code to work out the brightest point on the image is working fine. The problem is that the picture is displayed in a picture box which scales the image down, and your mousemove code that determines the position of the cursor within the picturebox doesn't match the point on the raw image.
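A hedged sketch of the coordinate translation, assuming the PictureBox uses SizeMode = Zoom (which scales uniformly and centers the image); the names picBox, image, and mouse are placeholders, not from the post:
Point ControlToImage(PictureBox picBox, Bitmap image, Point mouse)
{
    // Zoom preserves aspect ratio, so a single scale factor applies.
    float scale = Math.Min(
        (float)picBox.ClientSize.Width / image.Width,
        (float)picBox.ClientSize.Height / image.Height);
    // The scaled image is centered inside the control.
    float offsetX = (picBox.ClientSize.Width - image.Width * scale) / 2f;
    float offsetY = (picBox.ClientSize.Height - image.Height * scale) / 2f;
    int x = (int)((mouse.X - offsetX) / scale);
    int y = (int)((mouse.Y - offsetY) / scale);
    return new Point(x, y);
}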

How to select by color range?

In my application I have loaded a picture and I want to be able to detect similar colors. So if I select a color I want the application to be able to find all pixels with that same (or almost the same) color. This is what I wrote for a detection system that looks in a vertical direction between the point of the mouse click and the end of the bitmap.
for (int y = mouseY; y < m_bitmap.Height; y++)
{
    Color pixel = m_bitmap.GetPixel(mouseX, y);
    //check if there is another color
    if ((pixel.R > curcolor.R + threshold || pixel.R < curcolor.R - threshold) ||
        (pixel.G > curcolor.G + threshold || pixel.G < curcolor.G - threshold) ||
        (pixel.B > curcolor.B + threshold || pixel.B < curcolor.B - threshold))
    {
        //YESSSSS!
        if ((y - ytop > minheight) && (curcolor != Color.White)) //no white, at least 15px height
        {
            colorlayers.Add(new ColorLayer(curcolor, y - 1, ytop));
        }
        curcolor = pixel;
        ytop = y;
    }
}
Would this be the best way? Somehow it doesn't seem to work too well with yellowish colors.
RGB is a 3D space.
A color that is a full threshold away in all three directions is not so similar to the original one (and what is similar according to the numbers may not be so similar to human eyes).
I would make the check using HSL (for example), where the hue value is a finite 1D range. Just for example:
for (int y = mouseY; y < m_bitmap.Height; y++)
{
    Color pixel = m_bitmap.GetPixel(mouseX, y);
    if (Math.Abs(pixel.GetHue() - curcolor.GetHue()) <= threshold)
    {
        // ...
    }
}
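One caveat worth adding: hue is circular (0° and 360° are the same red), so a plain Math.Abs difference overstates the distance near the wrap point. A safer comparison, as a sketch:
// Circular hue distance: 350° and 10° are 20° apart, not 340°.
float dh = Math.Abs(pixel.GetHue() - curcolor.GetHue());
if (Math.Min(dh, 360f - dh) <= threshold)
{
    // ...
}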
Moreover, please note that using bitmaps this way (GetPixel()) is terribly slow; take a look at this post to see a (much) faster alternative.
It might be interesting to look at how the magic wand tool in Paint.NET works.
This is how they compare 2 colors:
private static bool CheckColor(ColorBgra a, ColorBgra b, int tolerance)
{
    int sum = 0;
    int diff;

    diff = a.R - b.R;
    sum += (1 + diff * diff) * a.A / 256;

    diff = a.G - b.G;
    sum += (1 + diff * diff) * a.A / 256;

    diff = a.B - b.B;
    sum += (1 + diff * diff) * a.A / 256;

    diff = a.A - b.A;
    sum += diff * diff;

    return (sum <= tolerance * tolerance * 4);
}
Source
The reason why yellow colors give a problem might be that RGB is not a perceptually uniform colorspace. This means that, given a distance between two points/colors in the colorspace, the perception of this color distance/difference will in general not be the same.
That said, you might want to use another color space, like HSL as suggested by Adriano, or perhaps Lab.
If you want to stick to RGB, I would suggest calculating the Euclidean distance, like this (I think it's simpler; note that in C# you need Math.Sqrt and explicit multiplication, since ^ is XOR):
float distance = (float)Math.Sqrt((pixel.R - curcolor.R) * (pixel.R - curcolor.R) +
                                  (pixel.G - curcolor.G) * (pixel.G - curcolor.G) +
                                  (pixel.B - curcolor.B) * (pixel.B - curcolor.B));
if (distance < threshold)
{
    // Do what you have to.
}

Determine image overall lightness

I need to overlay some texts on an image; this text should be lighter or darker based on the overall image lightness.
How to compute the overall (perceived) lightness of an image?
Found something interesting for a single pixel:
Formula to determine brightness of RGB color
Solved by me:
public static double CalculateAverageLightness(Bitmap bm)
{
    double lum = 0;
    var tmpBmp = new Bitmap(bm);
    var width = bm.Width;
    var height = bm.Height;
    var bppModifier = bm.PixelFormat == PixelFormat.Format24bppRgb ? 3 : 4;
    var srcData = tmpBmp.LockBits(new Rectangle(0, 0, bm.Width, bm.Height), ImageLockMode.ReadOnly, bm.PixelFormat);
    var stride = srcData.Stride;
    var scan0 = srcData.Scan0;
    //Luminance (standard, objective): (0.2126*R) + (0.7152*G) + (0.0722*B)
    //Luminance (perceived option 1): (0.299*R + 0.587*G + 0.114*B)
    //Luminance (perceived option 2, slower to calculate): sqrt( 0.299*R^2 + 0.587*G^2 + 0.114*B^2 )
    unsafe
    {
        byte* p = (byte*)(void*)scan0;
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                int idx = (y * stride) + x * bppModifier;
                lum += (0.299 * p[idx + 2] + 0.587 * p[idx + 1] + 0.114 * p[idx]);
            }
        }
    }
    tmpBmp.UnlockBits(srcData);
    tmpBmp.Dispose();
    var avgLum = lum / (width * height);
    return avgLum / 255.0;
}
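A minimal usage sketch for the original goal of picking a text colour (the file name and the 0.5 threshold are hypothetical): the return value is normalized to 0..1, so you can branch on it directly.
using (var bmp = new Bitmap("photo.jpg")) // hypothetical input file
{
    double lightness = CalculateAverageLightness(bmp); // 0.0 (dark) .. 1.0 (light)
    Color textColor = lightness > 0.5 ? Color.Black : Color.White;
    // draw the overlay text in textColor...
}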
I think all you can do is measure every pixel in the image and take an average. If that's too slow for your purposes, then I would suggest taking an evenly distributed sample of pixels and using that to calculate an average. You could also limit the pixels to the area where you need to draw the text.
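A sketch of that sampling idea, using GetPixel for brevity (the step size is a tuning knob, not something from the post):
// Averages GetBrightness() over every step-th pixel in both directions.
static double SampleAverageBrightness(Bitmap bmp, int step)
{
    double sum = 0;
    int count = 0;
    for (int y = 0; y < bmp.Height; y += step)
        for (int x = 0; x < bmp.Width; x += step)
        {
            sum += bmp.GetPixel(x, y).GetBrightness(); // 0.0 .. 1.0
            count++;
        }
    return count > 0 ? sum / count : 0;
}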
You can load the image as a Bitmap (http://msdn.microsoft.com/en-us/library/system.drawing.bitmap.aspx) and use the GetPixel method to actually get the colour values.
How you assess the brightness is entirely up to you. I would suggest that a simpler approach (say, just taking the highest colour value) may actually be better, as some users will perceive colour differently from the norm (colour-blindness etc.).

How to determine edges in an image optimally?

I was recently put in front of the problem of cropping and resizing images. I needed to crop the 'main content' of an image; for example, if I had an image similar to this:
(source: msn.com)
the result should be an image with the MSN content without the white margins (left and right).
I search on the X axis for the first and last color change, and the same on the Y axis. The problem is that traversing the image line by line takes a while; for an image that is 2000x1600 px it takes up to 2 seconds to return the CropRect => x1, y1, x2, y2 data.
I tried making a traversal for each coordinate and stopping at the first value found, but it didn't work in all test cases; sometimes the returned data wasn't the expected one, and the duration of the operations was similar.
Any idea how to cut down the traversal time and the discovery of the rectangle around the 'main content'?
public static CropRect EdgeDetection(Bitmap Image, float Threshold)
{
    CropRect cropRectangle = new CropRect();
    int lowestX = Image.Width;
    int lowestY = Image.Height;
    int largestX = 0;
    int largestY = 0;

    //find the lowest X bound;
    for (int y = 0; y < Image.Height - 1; ++y)
    {
        for (int x = 0; x < Image.Width - 1; ++x)
        {
            Color currentColor = Image.GetPixel(x, y);
            Color tempXcolor = Image.GetPixel(x + 1, y);
            Color tempYColor = Image.GetPixel(x, y + 1);
            if (Math.Sqrt(((currentColor.R - tempXcolor.R) * (currentColor.R - tempXcolor.R)) +
                          ((currentColor.G - tempXcolor.G) * (currentColor.G - tempXcolor.G)) +
                          ((currentColor.B - tempXcolor.B) * (currentColor.B - tempXcolor.B))) > Threshold)
            {
                if (lowestX > x)
                    lowestX = x;
                if (largestX < x)
                    largestX = x;
            }
            if (Math.Sqrt(((currentColor.R - tempYColor.R) * (currentColor.R - tempYColor.R)) +
                          ((currentColor.G - tempYColor.G) * (currentColor.G - tempYColor.G)) +
                          ((currentColor.B - tempYColor.B) * (currentColor.B - tempYColor.B))) > Threshold)
            {
                if (lowestY > y)
                    lowestY = y;
                if (largestY < y)
                    largestY = y;
            }
        }
    }

    if (lowestX < Image.Width / 4)
        cropRectangle.X = lowestX - 3 > 0 ? lowestX - 3 : 0;
    else
        cropRectangle.X = 0;

    if (lowestY < Image.Height / 4)
        cropRectangle.Y = lowestY - 3 > 0 ? lowestY - 3 : 0;
    else
        cropRectangle.Y = 0;

    cropRectangle.Width = largestX - lowestX + 8 > Image.Width ? Image.Width : largestX - lowestX + 8;
    cropRectangle.Height = largestY + 8 > Image.Height ? Image.Height - lowestY : largestY - lowestY + 8;

    return cropRectangle;
}
One possible optimisation is to use LockBits to access the color values directly rather than through the much slower GetPixel.
The Bob Powell page on LockBits is a good reference.
On the other hand, my testing has shown that the overhead associated with LockBits makes that approach slower if you try to write a GetPixelFast equivalent to GetPixel and drop it in as a replacement. Instead, you need to ensure that all pixel access is done in one hit rather than in multiple hits. This should fit nicely with your code, provided you don't lock/unlock on every pixel.
Here is an example
BitmapData bmd = b.LockBits(new Rectangle(0, 0, b.Width, b.Height),
                            System.Drawing.Imaging.ImageLockMode.ReadOnly, b.PixelFormat);
byte* row = (byte*)bmd.Scan0 + (y * bmd.Stride); // requires an unsafe context
// memory layout per pixel is B, G, R(, A): offset 0 = Blue, 1 = Green, 2 = Red
Color c = Color.FromArgb(row[x * pixelSize + 2], row[x * pixelSize + 1], row[x * pixelSize]);
b.UnlockBits(bmd);
Two more things to note:
This code is unsafe because it uses pointers
This approach depends on pixel size within Bitmap data, so you will need to derive pixelSize from bitmap.PixelFormat
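For example, the stock Image.GetPixelFormatSize helper returns the colour depth in bits per pixel, so:
int pixelSize = Image.GetPixelFormatSize(b.PixelFormat) / 8; // e.g. 32bppArgb -> 4, 24bppRgb -> 3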
GetPixel is probably your main culprit (I recommend running some profiling tests to track it down), but you could restructure the algorithm like this:
Scan first row (y = 0) from left-to-right and right-to-left and record the first and last edge location. It's not necessary to check all pixels, as you want the extreme edges.
Scan all subsequent rows, but now we only need to search outward (from center toward edges), starting at our last known minimum edge. We want to find the extreme boundaries, so we only need to search in the region where we could find new extrema.
Repeat the first two steps for the columns, establishing initial extrema and then using those extrema to iteratively bound the search.
This should greatly reduce the number of comparisons if your images are typically mostly content. The worst case is a completely blank image, for which this would probably be less efficient than the exhaustive search.
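A rough sketch of the row half of that bounded scan, assuming a HasEdge(bmp, x, y, threshold) helper that applies the same colour-distance test as the original code (the helper and variable names are hypothetical, not from the post):
int left = bmp.Width, right = -1;
for (int y = 0; y < bmp.Height - 1; y++)
{
    // Only scan where a new extreme is still possible.
    for (int x = 0; x < left; x++)              // can only move the left bound further left
        if (HasEdge(bmp, x, y, threshold)) { left = x; break; }
    for (int x = bmp.Width - 2; x > right; x--) // can only move the right bound further right
        if (HasEdge(bmp, x, y, threshold)) { right = x; break; }
}
// Repeat the same pattern per column to establish the top/bottom bounds.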
In extreme cases, image processing can also benefit from parallelism (split up the image and process it in multiple threads on a multi-core CPU), but this is quite a bit of additional work, and there are other, simpler changes you can still make. Threading overhead tends to limit the applicability of this technique; it is mainly helpful if you expect to run this thing in 'realtime', with dedicated repeated processing of incoming data (to make up for the initial setup costs).
This won't make it better in terms of complexity, but if you square your threshold, you won't need to take a square root, which is very expensive.
That should give a significant speed increase.
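Concretely, a sketch of the change applied to the inner test (variable names taken from the question's code, the hoisted thresholdSquared is new):
float thresholdSquared = Threshold * Threshold; // computed once, outside the loops
int dr = currentColor.R - tempXcolor.R;
int dg = currentColor.G - tempXcolor.G;
int db = currentColor.B - tempXcolor.B;
if (dr * dr + dg * dg + db * db > thresholdSquared)
{
    // same bookkeeping as before, no Math.Sqrt needed
}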
