Transparency of subtitles in Vista / Windows 7 - c#

I implemented the EVR renderer in a player of mine to work around the bad resizing quality on Windows Vista and later, and ran into problems:
I have subtitle overlay problems with the EVR (to see what I'm talking about, you must select the EVR in the options).
I used this to apply a 32-bit alpha bitmap onto the VMR9, using a DirectX surface:
private void SetVRM9MixerSettings(int width, int height, int lines)
{
    int hr = 0;
    VMR9AlphaBitmap alphaBmp;
    // Set the alpha bitmap parameters for using a Direct3D surface
    alphaBmp = new VMR9AlphaBitmap();
    alphaBmp.dwFlags = VMR9AlphaBitmapFlags.EntireDDS | VMR9AlphaBitmapFlags.FilterMode;
    // the bitmap was drawn onto unmanagedSurface with transparency
    alphaBmp.pDDS = unmanagedSurface;
    alphaBmp.rDest = GetDestRectangle(width, height, lines);
    alphaBmp.fAlpha = 1.0f;
    alphaBmp.dwFilterMode = VMRMixerPrefs.BiLinearFiltering;
    // for anaglyph half side-by-side
    if (FrameMode == Mars.FrameMode.HalfSideBySide)
    {
        alphaBmp.rDest.left /= 2;
        alphaBmp.rDest.right /= 2;
    }
    // set the alpha bitmap parameters
    hr = mixerBitmap.SetAlphaBitmap(ref alphaBmp);
    DsError.ThrowExceptionForHR(hr);
}
Now, however, the MediaFoundation.NET project doesn't expose the alphaBmp.pDDS pointer to set, so I cannot use a DirectDraw surface and have to use GDI instead (if someone has a method to do this, it would be cool; see the hedged sketch after the code below). But with GDI I cannot use 32-bit alpha bitmaps for true transparency; I only get 1-bit transparency with this approach:
private void SetEVRMixerSettings(int width, int height, int subtitleLines)
{
    MFVideoAlphaBitmap alphaBmp = new MFVideoAlphaBitmap();
    // alphaBitmap is a 32-bit semitransparent Bitmap
    Graphics g = Graphics.FromImage(alphaBitmap);
    // get pointers to the needed GDI objects
    IntPtr hdc = g.GetHdc();
    IntPtr memDC = CreateCompatibleDC(hdc);
    IntPtr hBitmap = alphaBitmap.GetHbitmap();
    IntPtr hOld = SelectObject(memDC, hBitmap);
    alphaBmp.GetBitmapFromDC = true;
    alphaBmp.stru = memDC;
    alphaBmp.paras = new MFVideoAlphaBitmapParams();
    alphaBmp.paras.dwFlags = MFVideoAlphaBitmapFlags.Alpha | MFVideoAlphaBitmapFlags.SrcColorKey | MFVideoAlphaBitmapFlags.DestRect;
    // calculate the destination rectangle
    MFVideoNormalizedRect mfNRect = new MFVideoNormalizedRect();
    NormalizedRect nRect = GetDestRectangle(width, height, subtitleLines);
    mfNRect.top = nRect.top;
    mfNRect.left = nRect.left;
    mfNRect.right = nRect.right;
    mfNRect.bottom = nRect.bottom;
    // used when viewing half side-by-side anaglyph video that is stretched to full width
    if (FrameMode == Mars.FrameMode.HalfSideBySide)
    {
        mfNRect.left /= 2;
        mfNRect.right /= 2;
    }
    alphaBmp.paras.nrcDest = mfNRect;
    // calculate the source rectangle (the full subtitle bitmap)
    MFRect rcSrc = new MFRect();
    rcSrc.bottom = alphaBitmap.Height;
    rcSrc.right = alphaBitmap.Width;
    rcSrc.top = 0;
    rcSrc.left = 0;
    alphaBmp.paras.rcSrc = rcSrc;
    // apply 1-bit transparency via a colour key
    System.Drawing.Color colorKey = System.Drawing.Color.Black;
    alphaBmp.paras.clrSrcKey = ColorTranslator.ToWin32(colorKey);
    // 90% visible
    alphaBmp.paras.fAlpha = 0.9F;
    // hand the bitmap to the EVR mixer
    evrMixerBitmap.SetAlphaBitmap(alphaBmp);
    // cleanup
    SelectObject(memDC, hOld);
    DeleteDC(memDC);
    DeleteObject(hBitmap); // GetHbitmap() creates a GDI bitmap that must be freed
    g.ReleaseHdc();
    g.Dispose();
}
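Edit, one possible lead on the first question: in the native headers, MFVideoAlphaBitmap is declared as a BOOL GetBitmapFromDC followed by a union of an HDC and an IDirect3DSurface9 pointer, and MediaFoundation.NET appears to expose that union as the IntPtr field stru. So, as an untested sketch (assuming unmanagedSurface is a raw IDirect3DSurface9 pointer, as in the VMR9 code above), passing a Direct3D surface instead of a DC might look like this:
// Untested sketch. The native struct is:
//   BOOL GetBitmapFromDC; union { HDC hdc; IDirect3DSurface9 *pDDS; } bitmap; ...
// so 'stru' may be able to carry a surface pointer when GetBitmapFromDC is false.
MFVideoAlphaBitmap alphaBmp = new MFVideoAlphaBitmap();
alphaBmp.GetBitmapFromDC = false;   // select the pDDS arm of the union
alphaBmp.stru = unmanagedSurface;   // assumed: raw IDirect3DSurface9* (IntPtr)
alphaBmp.paras = new MFVideoAlphaBitmapParams();
alphaBmp.paras.dwFlags = MFVideoAlphaBitmapFlags.Alpha; // use the surface's per-pixel alpha
alphaBmp.paras.fAlpha = 1.0f;
evrMixerBitmap.SetAlphaBitmap(alphaBmp);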
So the questions are:
How can I use a DirectDraw surface to mix bitmaps onto the EVR video,
or
How can I mix a semi-transparent bitmap without DirectDraw?
Thank you very much!

I'll try to answer the second question...
Alpha blending is a rather simple task.
Assume that alpha is in the range from 0.0 - 1.0, where 0.0 means fully transparent and 1.0 represents a fully opaque color.
R_result = R_Source * alpha + R_destination * (1.0 - alpha)
Since we don't really need floats here, we can switch alpha to a 0-255 range.
R_result = ( R_Source * alpha + R_destination * (255 - alpha) ) >> 8
You can optimize it further... it's up to you.
Of course, same applies for G and B.
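For example, with alpha = 128, a source value of 200 and a destination value of 100 give (200 * 128 + 100 * 127) >> 8 = 149. Here is a minimal sketch of that blend applied to two same-sized 32bppArgb bitmaps; the method name and the LockBits plumbing are mine, not part of the formula above (requires System.Drawing, System.Drawing.Imaging and System.Runtime.InteropServices):
static void BlendOnto(Bitmap background, Bitmap overlay)
{
    Rectangle rect = new Rectangle(0, 0, background.Width, background.Height);
    BitmapData dst = background.LockBits(rect, ImageLockMode.ReadWrite, PixelFormat.Format32bppArgb);
    BitmapData src = overlay.LockBits(rect, ImageLockMode.ReadOnly, PixelFormat.Format32bppArgb);
    int bytes = Math.Abs(dst.Stride) * background.Height;
    byte[] d = new byte[bytes];
    byte[] s = new byte[bytes];
    Marshal.Copy(dst.Scan0, d, 0, bytes);
    Marshal.Copy(src.Scan0, s, 0, bytes);
    for (int i = 0; i < bytes; i += 4) // memory layout per pixel is B, G, R, A
    {
        int a = s[i + 3]; // the overlay's alpha drives the blend
        for (int c = 0; c < 3; c++)
            d[i + c] = (byte)((s[i + c] * a + d[i + c] * (255 - a)) >> 8);
    }
    Marshal.Copy(d, 0, dst.Scan0, bytes);
    background.UnlockBits(dst);
    overlay.UnlockBits(src);
}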

Related

How to convert a colored image to an image that has only two predefined colors?

I am trying to convert a colored image to an image that has only two colors. My approach was first to convert the image to a black-and-white image using the AForge.NET Threshold class, and then convert the black and white pixels into the colors that I want. The display is real-time, so this approach introduces a significant delay. I was wondering if there's a more straightforward way of doing this.
Bitmap image = (Bitmap)eventArgs.Frame.Clone();
Grayscale greyscale = new Grayscale(0.2125, 0.7154, 0.0721);
Bitmap grayImage = greyscale.Apply(image);
Threshold threshold = new Threshold(trigger);
threshold.ApplyInPlace(grayImage);
Bitmap colorImage = CreateNonIndexedImage(grayImage);
if (colorFilter)
{
    for (int y = 0; y < colorImage.Height; y++)
    {
        for (int x = 0; x < colorImage.Width; x++)
        {
            if (colorImage.GetPixel(x, y).R == 0 && colorImage.GetPixel(x, y).G == 0 && colorImage.GetPixel(x, y).B == 0)
            {
                colorImage.SetPixel(x, y, Color.Blue);
            }
            else
            {
                colorImage.SetPixel(x, y, Color.Yellow);
            }
        }
    }
}
private Bitmap CreateNonIndexedImage(Image src)
{
    Bitmap newBmp = new Bitmap(src.Width, src.Height, System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    using (Graphics gfx = Graphics.FromImage(newBmp))
    {
        gfx.DrawImage(src, 0, 0);
    }
    return newBmp;
}
The normal way to match an image to specific colours is to use Pythagorean distance between the colours in a 3D environment with R, G and B as axes. I got a bunch of toolsets for manipulating images and colours, and I'm not too familiar with any external frameworks, so I'll just dig through my stuff and give you the relevant functions.
First of all, the colour replacement itself. This code will match any colour you give to the closest available colour on a limited palette, and return the index in the given array. Note that I left out the "take the square root" part of the Pythagorean distance calculation; we don't need to know the actual distance, we only need to compare them, and that works just as well without that rather CPU-heavy operation.
public static Int32 GetClosestPaletteIndexMatch(Color col, Color[] colorPalette)
{
    Int32 colorMatch = 0;
    Int32 leastDistance = Int32.MaxValue;
    Int32 red = col.R;
    Int32 green = col.G;
    Int32 blue = col.B;
    for (Int32 i = 0; i < colorPalette.Length; i++)
    {
        Color paletteColor = colorPalette[i];
        Int32 redDistance = paletteColor.R - red;
        Int32 greenDistance = paletteColor.G - green;
        Int32 blueDistance = paletteColor.B - blue;
        Int32 distance = (redDistance * redDistance) + (greenDistance * greenDistance) + (blueDistance * blueDistance);
        if (distance >= leastDistance)
            continue;
        colorMatch = i;
        leastDistance = distance;
        if (distance == 0)
            return i;
    }
    return colorMatch;
}
Now, on a high-coloured image, this palette matching would have to be done for every pixel on the image, but if your input is guaranteed to be paletted already, then you can just do it on the colour palette, reducing your palette lookups to just 256 per image:
Color[] colors = new Color[] { Color.Black, Color.White };
ColorPalette pal = image.Palette;
for (Int32 i = 0; i < pal.Entries.Length; i++)
{
    Int32 foundIndex = ColorUtils.GetClosestPaletteIndexMatch(pal.Entries[i], colors);
    pal.Entries[i] = colors[foundIndex];
}
image.Palette = pal;
And that's it; all colours on the palette replaced by their closest match.
Note that the Palette property actually makes a new ColorPalette object, and doesn't reference the one in the image, so the code image.Palette.Entries[0] = Color.Blue; would not work, since it'd just modify that unreferenced copy. Because of that, the palette object always has to be taken out, edited and then reassigned to the image.
If you need to save the result to the same filename, there's a trick with a stream you can use, but if you simply need the object to have its palette changed to these two colours, that's really it.
In case you are not sure of the original image format, the process is quite a bit more involved:
As mentioned before in the comments, GetPixel and SetPixel are extremely slow, and it's much more efficient to access the image's underlying bytes. However, unless you are 100% certain what your input type's pixel format is, you can't just go and access these bytes, since you need to know how to read them. A simple workaround for this is to just let the framework do the work for you, by painting your existing image on a new 32 bits per pixel image:
public static Bitmap PaintOn32bpp(Image image, Color? transparencyFillColor)
{
    Bitmap bp = new Bitmap(image.Width, image.Height, PixelFormat.Format32bppArgb);
    using (Graphics gr = Graphics.FromImage(bp))
    {
        if (transparencyFillColor.HasValue)
            using (System.Drawing.SolidBrush myBrush = new System.Drawing.SolidBrush(Color.FromArgb(255, transparencyFillColor.Value)))
                gr.FillRectangle(myBrush, new Rectangle(0, 0, image.Width, image.Height));
        gr.DrawImage(image, new Rectangle(0, 0, bp.Width, bp.Height));
    }
    return bp;
}
Now, you probably want to make sure transparent pixels don't end up as whatever colour happens to be hiding behind an alpha value of 0, so you better specify the transparencyFillColor in this function to give a backdrop to remove any transparency from the source image.
Now that we have the high-colour image, the next step is going over the image bytes, converting them to ARGB colours, and matching those to the palette using the function I gave before. I'd advise making an 8-bit image, because they're the easiest to edit as bytes, and the fact that they have a colour palette makes it ridiculously easy to replace colours on them after they're created.
Anyway, the bytes. It's probably more efficient for large files to iterate through the bytes in unsafe memory right away, but I generally prefer copying them out. Your choice, of course; if you think it's worth it, you can combine the two functions below to access it directly. Here's a good example for accessing the colour bytes directly.
/// <summary>
/// Gets the raw bytes from an image.
/// </summary>
/// <param name="sourceImage">The image to get the bytes from.</param>
/// <param name="stride">Stride of the retrieved image data.</param>
/// <returns>The raw bytes of the image.</returns>
public static Byte[] GetImageData(Bitmap sourceImage, out Int32 stride)
{
    BitmapData sourceData = sourceImage.LockBits(new Rectangle(0, 0, sourceImage.Width, sourceImage.Height), ImageLockMode.ReadOnly, sourceImage.PixelFormat);
    stride = sourceData.Stride;
    Byte[] data = new Byte[stride * sourceImage.Height];
    Marshal.Copy(sourceData.Scan0, data, 0, data.Length);
    sourceImage.UnlockBits(sourceData);
    return data;
}
Now, all you need to do is make an array to represent your 8-bit image, iterate over all bytes per four, and match the colours you get to the ones in your palette. Note that you can never assume that the actual byte length of one line of pixels (the stride) equals the width multiplied by the bytes per pixel. Because of this, while the code does simply add the pixel size to the read offset to get the next pixel on one line, it uses the stride for skipping over whole lines of pixels in the data.
public static Byte[] Convert32BitTo8Bit(Byte[] imageData, Int32 width, Int32 height, Color[] palette, ref Int32 stride)
{
    if (stride < width * 4)
        throw new ArgumentException("Stride is smaller than one pixel line!", "stride");
    Byte[] newImageData = new Byte[width * height];
    for (Int32 y = 0; y < height; y++)
    {
        Int32 inputOffs = y * stride;
        Int32 outputOffs = y * width;
        for (Int32 x = 0; x < width; x++)
        {
            // 32bppArgb: the order of the bytes is Alpha, Red, Green, Blue, but
            // since this is actually the full 4-byte value read from the offset,
            // and this value is considered little-endian, they are actually in the
            // order BGRA. Since we're converting to a palette, we ignore the alpha
            // one and just give RGB.
            Color c = Color.FromArgb(imageData[inputOffs + 2], imageData[inputOffs + 1], imageData[inputOffs]);
            // Match to palette index
            newImageData[outputOffs] = (Byte)ColorUtils.GetClosestPaletteIndexMatch(c, palette);
            inputOffs += 4;
            outputOffs++;
        }
    }
    stride = width;
    return newImageData;
}
Now we have our 8-bit array. To convert that array to an image you can use the BuildImage function I already posted in another answer.
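That answer isn't reproduced here, so here is a minimal sketch of what such a BuildImage helper might look like, reconstructed to match the call in the conversion code below (my reconstruction, not the original):
// Sketch of a BuildImage helper matching the call in ConvertToColors below.
public static Bitmap BuildImage(Byte[] sourceData, Int32 width, Int32 height, Int32 stride,
                                PixelFormat pixelFormat, Color[] palette, Color? defaultColor)
{
    Bitmap newImage = new Bitmap(width, height, pixelFormat);
    BitmapData targetData = newImage.LockBits(new Rectangle(0, 0, width, height),
        ImageLockMode.WriteOnly, newImage.PixelFormat);
    // Copy line by line: the new bitmap's internal stride may differ from the source stride.
    Int32 lineWidth = ((Image.GetPixelFormatSize(pixelFormat) * width) + 7) / 8;
    for (Int32 y = 0; y < height; y++)
        Marshal.Copy(sourceData, y * stride, targetData.Scan0 + y * targetData.Stride, lineWidth);
    newImage.UnlockBits(targetData);
    // For indexed formats, fill the palette; pad unused entries with the default colour.
    if (palette != null && (pixelFormat & PixelFormat.Indexed) != 0)
    {
        ColorPalette pal = newImage.Palette;
        for (Int32 i = 0; i < pal.Entries.Length; i++)
            pal.Entries[i] = i < palette.Length ? palette[i] : (defaultColor ?? Color.Black);
        newImage.Palette = pal;
    }
    return newImage;
}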
So finally, using these tools, the conversion code should be something like this:
public static Bitmap ConvertToColors(Bitmap image, Color[] colors)
{
    Int32 width = image.Width;
    Int32 height = image.Height;
    Int32 stride;
    Byte[] hiColData;
    // use "using" to properly dispose of the temporary image object
    using (Bitmap hiColImage = PaintOn32bpp(image, colors[0]))
        hiColData = GetImageData(hiColImage, out stride);
    Byte[] eightBitData = Convert32BitTo8Bit(hiColData, width, height, colors, ref stride);
    return BuildImage(eightBitData, width, height, stride, PixelFormat.Format8bppIndexed, colors, Color.Black);
}
There we go; your image is converted to an 8-bit paletted image, with whatever palette you want.
If you want to actually match to black and white and then replace the colours, that's no problem either; just do the conversion with a palette containing only black and white, then take the resulting bitmap's Palette object, replace the colours in it, and assign it back to the image.
Color[] colors = new Color[] {Color.Black, Color.White };
Bitmap newImage = ConvertToColors(image, colors);
ColorPalette pal = newImage.Palette;
pal.Entries[0] = Color.Blue;
pal.Entries[1] = Color.Yellow;
newImage.Palette = pal;

How can I identify the color of the letters in these images?

I am using this article to solve captchas. It works by removing the background from the image using AForge, and then applying Tesseract OCR to the resulting cleaned image.
The problem is, it currently relies on the letters being black, and since each captcha has a different text color, I need to either pass the color to the image cleaner, or change the color of the letters to black. To do either one, I need to know what the existing color of the letters is.
How might I go about identifying the color of the letters?
Using the answer by @Robert Harvey, I went and developed the same code using LockBits and unsafe methods to improve its speed. You must compile with the "Allow unsafe code" flag on. Note that the order of the pixel bytes returned from the image is BGR, not RGB, and I am locking the bitmap using Format24bppRgb to force it to use 3 bytes per colour.
public unsafe Color GetTextColour(Bitmap bitmap)
{
    BitmapData bitmapData = bitmap.LockBits(new Rectangle(0, 0, bitmap.Width, bitmap.Height), ImageLockMode.ReadOnly, PixelFormat.Format24bppRgb);
    try
    {
        const int bytesPerPixel = 3;
        const int red = 2;
        const int green = 1;
        int halfHeight = bitmap.Height / 2;
        byte* row = (byte*)bitmapData.Scan0 + (halfHeight * bitmapData.Stride);
        Color startingColour = Color.FromArgb(row[red], row[green], row[0]);
        for (int wi = bytesPerPixel, wc = bitmapData.Width * bytesPerPixel; wi < wc; wi += bytesPerPixel)
        {
            Color thisColour = Color.FromArgb(row[wi + red], row[wi + green], row[wi]);
            if (thisColour != startingColour)
            {
                return thisColour;
            }
        }
        return Color.Empty; // or some other default value
    }
    finally
    {
        bitmap.UnlockBits(bitmapData);
    }
}
The solution to this particular problem turned out to be relatively simple. All I had to do was get the color of the edge pixel halfway down the left side of the image, then scan pixels to the right until the color changes; that's the color of the first letter.
public Color GetTextColor(Bitmap bitmap)
{
    var y = bitmap.Height / 2;
    var startingColor = bitmap.GetPixel(0, y);
    for (int x = 1; x < bitmap.Width; x++)
    {
        var thisColor = bitmap.GetPixel(x, y);
        if (thisColor != startingColor)
            return thisColor;
    }
    return Color.Empty; // Color is a struct, so it cannot be null
}

How to get pixel color of Direct2D bitmap on SharpDX

I use SharpDX and I don't understand how to get the pixel color of a bitmap. I found the CopySubresourceRegion method, but it works on Direct3D.
I have a strange idea:
I could create a RenderForm and draw my bitmap on the form, then get the form's graphics, create a bitmap via "new Bitmap(width, height, graphics)", and then get the pixel color from the new bitmap.
I wrote a special function for getting the pixel color. This solved my problem ;)
C# - SharpDX
Color4 GetPixel(Bitmap image, int x, int y, RenderTarget renderTarget)
{
    var deviceContext2d = renderTarget.QueryInterface<DeviceContext>();
    var bitmapProperties = new BitmapProperties1();
    bitmapProperties.BitmapOptions = BitmapOptions.CannotDraw | BitmapOptions.CpuRead;
    bitmapProperties.PixelFormat = image.PixelFormat;
    // create a CPU-readable copy of the source bitmap and map it
    var bitmap1 = new Bitmap1(deviceContext2d, new Size2((int)image.Size.Width, (int)image.Size.Height), bitmapProperties);
    bitmap1.CopyFromBitmap(image);
    var map = bitmap1.Map(MapOptions.Read);
    // note: this assumes the mapped rows are tightly packed (map.Pitch == width * 4)
    var size = (int)image.Size.Width * (int)image.Size.Height * 4;
    byte[] bytes = new byte[size];
    Marshal.Copy(map.DataPointer, bytes, 0, size);
    bitmap1.Unmap();
    bitmap1.Dispose();
    deviceContext2d.Dispose();
    var position = (y * (int)image.Size.Width + x) * 4;
    // note: Color4 expects normalized floats; divide each byte by 255f if you need 0..1 values
    return new Color4(bytes[position], bytes[position + 1], bytes[position + 2], bytes[position + 3]);
}
If you are targeting Direct2D 1.1 (or higher), then you can use the ID2D1Bitmap1::Map method. This will require that you set D2D1_BITMAP_OPTIONS_CPU_READ and D2D1_BITMAP_OPTIONS_CANNOT_DRAW flags on the bitmap when creating it.
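In SharpDX terms, here is a minimal sketch of that Map call that also honours the returned row pitch (the surrounding variable names are assumptions):
// Sketch: read one 4-byte pixel from a CPU-readable Direct2D bitmap.
// 'bitmap1' is assumed to have been created with BitmapOptions.CpuRead | BitmapOptions.CannotDraw.
var map = bitmap1.Map(MapOptions.Read);
try
{
    int offset = y * map.Pitch + x * 4; // map.Pitch can be larger than width * 4
    var pixel = new byte[4];
    Marshal.Copy(map.DataPointer + offset, pixel, 0, 4);
    // for a B8G8R8A8 format this gives: pixel[0]=B, pixel[1]=G, pixel[2]=R, pixel[3]=A
}
finally
{
    bitmap1.Unmap();
}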

DWM API: Incorrect destination position on some computers

I'm using the DWM API for displaying a thumbnail of another window in my WPF app. On most computers it works fine, but on some computers the thumbnail in my app is mispositioned and smaller (it's moved a few pixels left and up, and it is about 30% smaller).
For creating the thumbnail relationship I'm using this code (and dwmapi.dll):
if (DwmRegisterThumbnail(dest, src, out m_hThumbnail) != 0) return;
PSIZE size;
DwmQueryThumbnailSourceSize(m_hThumbnail, out size);
DWM_THUMBNAIL_PROPERTIES props = new DWM_THUMBNAIL_PROPERTIES
{
    fVisible = true,
    dwFlags = DwmApiConstants.DWM_TNP_VISIBLE | DwmApiConstants.DWM_TNP_RECTDESTINATION | DwmApiConstants.DWM_TNP_OPACITY,
    opacity = 0xFF,
    rcDestination = destinationRect
};
DwmUpdateThumbnailProperties(m_hThumbnail, ref props);
For positioning in my app I'm using a canvas whose position I'm obtaining using:
var generalTransform = PreviewCanvas.TransformToAncestor(App.Current.MainWindow);
var leftTopPoint = generalTransform.Transform(new Point(0, 0));
return new System.Drawing.Rectangle((int)leftTopPoint.X, (int)leftTopPoint.Y, (int)PreviewCanvas.ActualWidth, (int)PreviewCanvas.ActualHeight);
Thanks to Hans, it was a problem with the DIP -> px conversion (I thought that WPF dimensions were in pixels).
So, I changed
return new System.Drawing.Rectangle(
(int)leftTopPoint.X,
(int)leftTopPoint.Y,
(int)PreviewCanvas.ActualWidth,
(int)PreviewCanvas.ActualHeight
);
to:
using (var graphics = System.Drawing.Graphics.FromHwnd(IntPtr.Zero))
{
return new System.Drawing.Rectangle(
(int)(leftTopPoint.X * graphics.DpiX / 96.0),
(int)(leftTopPoint.Y * graphics.DpiY / 96.0),
(int)(PreviewCanvas.ActualWidth * graphics.DpiX / 96.0),
(int)(PreviewCanvas.ActualHeight * graphics.DpiY / 96.0)
);
}
and now positions and sizes of thumbnails are correct on all devices.
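As a side note, WPF can also provide the same scale factor directly; here is a small sketch, assuming it runs after the window has been shown (PresentationSource.FromVisual returns null before that):
// Alternative sketch: get the DIP -> pixel transform from WPF itself instead of GDI+.
var source = PresentationSource.FromVisual(App.Current.MainWindow);
if (source != null && source.CompositionTarget != null)
{
    // M11 = DpiX / 96.0, M22 = DpiY / 96.0
    Matrix toDevice = source.CompositionTarget.TransformToDevice;
    Point devicePoint = toDevice.Transform(leftTopPoint); // leftTopPoint from the code above
}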

Kinect depth detection

I know how to do it in WPF, but I have a problem capturing depth in a WinForms application.
I found some code, shown below:
private void Kinect_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            Bitmap DepthBitmap = new Bitmap(depthFrame.Width, depthFrame.Height, PixelFormat.Format32bppRgb);
            if (_depthPixels.Length != depthFrame.PixelDataLength)
            {
                _depthPixels = new DepthImagePixel[depthFrame.PixelDataLength];
                _mappedDepthLocations = new ColorImagePoint[depthFrame.PixelDataLength];
            }
            // Copy the depth frame data onto the bitmap
            var _pixelData = new short[depthFrame.PixelDataLength];
            depthFrame.CopyPixelDataTo(_pixelData);
            BitmapData bmapdata = DepthBitmap.LockBits(new Rectangle(0, 0, depthFrame.Width, depthFrame.Height), ImageLockMode.WriteOnly, DepthBitmap.PixelFormat);
            IntPtr ptr = bmapdata.Scan0;
            Marshal.Copy(_pixelData, 0, ptr, depthFrame.Width * depthFrame.Height);
            DepthBitmap.UnlockBits(bmapdata);
            pictureBox2.Image = DepthBitmap;
        }
    }
}
but this is not giving me the grayscale depth; the image comes out purple. Any improvement or help?
I found the solution myself, using a function to convert the depth frame:
void Kinect_DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame != null)
        {
            this.depthFrame32 = new byte[depthFrame.Width * depthFrame.Height * 4];
            // update the image to the new format
            this.depthPixelData = new short[depthFrame.PixelDataLength];
            depthFrame.CopyPixelDataTo(this.depthPixelData);
            byte[] convertedDepthBits = this.ConvertDepthFrame(this.depthPixelData, ((KinectSensor)sender).DepthStream);
            Bitmap bmap = new Bitmap(depthFrame.Width, depthFrame.Height, PixelFormat.Format32bppRgb);
            BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, depthFrame.Width, depthFrame.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
            IntPtr ptr = bmapdata.Scan0;
            Marshal.Copy(convertedDepthBits, 0, ptr, 4 * depthFrame.PixelDataLength);
            bmap.UnlockBits(bmapdata);
            pictureBox2.Image = bmap;
        }
    }
}
private byte[] ConvertDepthFrame(short[] depthFrame, DepthImageStream depthStream)
{
    // Run through the depth frame, making the correlation between the two arrays
    for (int i16 = 0, i32 = 0; i16 < depthFrame.Length && i32 < this.depthFrame32.Length; i16++, i32 += 4)
    {
        // We don't care about the player information here, so we just rule it out by shifting the value.
        int realDepth = depthFrame[i16] >> DepthImageFrame.PlayerIndexBitmaskWidth;
        // We are left with 13 bits of depth information that we need to convert into
        // an 8-bit number for each pixel. There are hundreds of ways to do this;
        // this is just the simplest one.
        byte Distance = 0;
        // Xbox Kinects (default) are limited to between 800mm and 4096mm.
        int MinimumDistance = 800;
        int MaximumDistance = 4096;
        // Xbox Kinects (default) are not reliable closer than 800mm, so rule those useless measurements out.
        // If the distance of this pixel is beyond 800mm, paint it in its equivalent gray.
        if (realDepth > MinimumDistance)
        {
            // Convert realDepth into the 0 to 255 range for our actual distance.
            // Use only one of the following Distance assignments:
            // White = Far, Black = Close:
            // Distance = (byte)((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance));
            // White = Close, Black = Far:
            Distance = (byte)(255 - ((realDepth - MinimumDistance) * 255 / (MaximumDistance - MinimumDistance)));
            // Use the distance to paint each channel (R, G and B) of the current pixel.
            // Painting R, G and B with the same value makes it go from black to gray.
            this.depthFrame32[i32 + RedIndex] = Distance;
            this.depthFrame32[i32 + GreenIndex] = Distance;
            this.depthFrame32[i32 + BlueIndex] = Distance;
        }
        // If we are closer than 800mm, paint the pixel black so we know it is not giving a good value.
        else
        {
            this.depthFrame32[i32 + RedIndex] = 0;
            this.depthFrame32[i32 + GreenIndex] = 0;
            this.depthFrame32[i32 + BlueIndex] = 0;
        }
    }
    return this.depthFrame32;
}
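The handler above also relies on a few fields that aren't shown. Plausible declarations (my assumption, based on the in-memory B, G, R order of 32bppRgb data) would be:
// Assumed fields for the handler above; not part of the original answer.
// Format32bppRgb is laid out in memory as B, G, R, (unused) per pixel.
private const int BlueIndex = 0;
private const int GreenIndex = 1;
private const int RedIndex = 2;
private byte[] depthFrame32;
private short[] depthPixelData;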
So I presume the RGB frame is working out for you. In that case:
First, to enable the depth camera you need to call:
sensor->NuiInitialize(NUI_INITIALIZE_FLAG_USES_DEPTH | /* all the other stuff you use */);
Second, to start streaming you need to call:
if (int(streams & _Kinect_zed)) ret = sensor->NuiImageStreamOpen(
    NUI_IMAGE_TYPE_DEPTH,         // depth camera or RGB camera?
    NUI_IMAGE_RESOLUTION_640x480, // image resolution
    NUI_IMAGE_STREAM_FLAG_DISTINCT_OVERFLOW_DEPTH_VALUES, // image stream flags; NUI_IMAGE_STREAM_FLAG_ENABLE_NEAR_MODE does not work !!!
    2,                            // number of frames to buffer
    NULL,                         // event handle
    &stream_hzed); else stream_hzed = NULL;
Beware: not all resolution/flag combinations work on all models of Kinect !!!
The one above is safe even for the older models like mine.
This is how I capture a frame (called repeatedly from a timer or thread loop):
ret = sensor->NuiImageStreamGetNextFrame(stream_hzed, 0, &imageFrame);
if (ret >= 0)
{
    // copy data from frame
    imageFrame.pFrameTexture->LockRect(0, &LockedRect, NULL, 0);
    if (LockedRect.Pitch != 0)
    {
        const BYTE* curr = (const BYTE*)LockedRect.pBits;
        union _col { BYTE u8[2]; WORD u16; } col;
        col.u16 = 0;
        pnt3d p;
        long ax, ay;
        float mxs = float(xs) / (62.0 * deg), mys = float(ys) / (48.6 * deg);
        for (int x = 0, y = 0;;)
        {
            col.u8[0] = *curr; curr++;
            col.u8[1] = *curr; curr++;
            p.raw = col.u16;
            p.rgb = &rgb_default;
            if (p.raw == 0x0000) p.z = 0.0; // p.z is the perpendicular distance from the sensor (the Kinect corrects this itself)
            else if (p.raw >= 0x8000) p.z = 4.0;
            else p.z = 0.8 + (float(p.raw - 6576) * 0.00012115165336374002280501710376283);
            // depth FOV correction
            p.x = zx[x] * p.z;
            p.y = zy[y] * p.z;
            // color FOV correction: zed 58.5° x 45.6° | rgb 62.0° x 48.6° | 25mm distance
            if (p.z > 0.0)
            {
                ax = (((x + 10 - xs2) * 241) >> 8) + xs2; // cameras' x-offset and different FOV
                ay = (((y + 30 - ys2) * 240) >> 8) + ys2; // cameras' y-offset??? and different FOV
                if ((ax >= 0) && (ax < xs))
                    if ((ay >= 0) && (ay < ys)) p.rgb = &rgb[ay][ax];
            }
            xyz[y][x] = p;
            x++; if (x >= xs) { x = 0; y++; if (y >= ys) break; }
        }
    }
    // release frame
    imageFrame.pFrameTexture->UnlockRect(0);
    ret = sensor->NuiImageStreamReleaseFrame(stream_hzed, &imageFrame);
    stream_changed |= _Kinect_zed;
}
Sorry for the incomplete source code...
- it is all copy-pasted from my Kinect class (BDS2006 Turbo C++)
- so you need to check your code in case you forgot something
- and if so, transform my code to C# (I am not a C# user)
- most likely you forgot to NuiInitialize with the depth flag
- or you set an invalid resolution/flags/precision or framerate for your HW
If nothing works at all, then you need to initialize the sensor in the first place:
int sensors;
INuiSensor *sensor;
if ((NUIGetSensorCount(&sensors) < 0) || (sensors < 1)) return false;
if (NUICreateSensorByIndex(0, &sensor) < 0) return false;
If you link to the DLL on your own, then link only these functions:
typedef HRESULT(__stdcall *_NuiGetSensorCount)(int * pCount); _NuiGetSensorCount NUIGetSensorCount = NULL;
typedef HRESULT(__stdcall *_NuiCreateSensorByIndex)(int index, INuiSensor **ppNuiSensor); _NuiCreateSensorByIndex NUICreateSensorByIndex = NULL;
Every other function must be obtained via COM from inside the SDK headers !!!
If you link and use them on your own, then you will not be connected to your physical Kinect !!!
Basically the Kinect SDK is developed for WPF applications. In Windows Forms you have to convert the short array of the depth data to a Bitmap to display it in a PictureBox. Based on my experiments, WPF is better for programming with the Kinect.
Below is the function that I used to convert the depth frame to a Bitmap for showing in the picture box.
private Bitmap ImageToBitmap(DepthImageFrame Image)
{
    short[] pixeldata = new short[Image.PixelDataLength];
    int stride = Image.Width * 2;
    Image.CopyPixelDataTo(pixeldata);
    Bitmap bmap = new Bitmap(Image.Width, Image.Height, PixelFormat.Format16bppRgb555);
    BitmapData bmapdata = bmap.LockBits(new Rectangle(0, 0, Image.Width, Image.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
    IntPtr ptr = bmapdata.Scan0;
    Marshal.Copy(pixeldata, 0, ptr, Image.PixelDataLength);
    bmap.UnlockBits(bmapdata);
    return bmap;
}
You may call it like this:
DepthImageFrame VFrame = e.OpenDepthImageFrame();
if (VFrame == null) return;
short[] pixelS = new short[VFrame.PixelDataLength];
Bitmap bmap = ImageToBitmap(VFrame);
