I am having a problem with EmguCV. I used a demo application and edited it to fit my needs.
It involves the following function:
public override Image<Gray, byte> DetectSkin(Image<Bgr, byte> Img, IColor min, IColor max)
{
    Image<Hsv, Byte> currentHsvFrame = Img.Convert<Hsv, Byte>();
    Image<Gray, byte> skin = new Image<Gray, byte>(Img.Width, Img.Height);
    skin = currentHsvFrame.InRange((Hsv)min, (Hsv)max);
    return skin;
}
In the demo application, the Image comes from a video. The frame is captured from the video like this:
Image<Bgr, Byte> currentFrame;
grabber = new Emgu.CV.Capture(@".\..\..\..\M2U00253.MPG");
grabber.QueryFrame();
currentFrame = grabber.QueryFrame();
In my application, the Image comes from a Microsoft Kinect stream.
I use the following function:
private void SensorColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame != null)
        {
            // Copy the pixel data from the image to a temporary array
            colorFrame.CopyPixelDataTo(this.colorPixels);

            // Write the pixel data into our bitmap
            this.colorBitmap.WritePixels(
                new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
                this.colorPixels,
                this.colorBitmap.PixelWidth * sizeof(int),
                0);

            Bitmap b = BitmapFromWriteableBitmap(this.colorBitmap);
            currentFrame = new Image<Bgr, byte>(b);
            currentFrameCopy = currentFrame.Copy();

            skinDetector = new YCrCbSkinDetector();
            Image<Gray, Byte> skin = skinDetector.DetectSkin(currentFrame, YCrCb_min, YCrCb_max);
        }
    }
}
private static System.Drawing.Bitmap BitmapFromWriteableBitmap(WriteableBitmap writeBmp)
{
    System.Drawing.Bitmap bmp;
    using (System.IO.MemoryStream outStream = new System.IO.MemoryStream())
    {
        BitmapEncoder enc = new BmpBitmapEncoder();
        enc.Frames.Add(BitmapFrame.Create((BitmapSource)writeBmp));
        enc.Save(outStream);
        bmp = new System.Drawing.Bitmap(outStream);
    }
    return bmp;
}
Now, the demo application works and mine doesn't. Mine gives the following exception:
And the image contains the following:
I really don't understand this exception. And when I run the working demo application, the image contains:
Which is, in my eyes, exactly the same. I really don't understand this. Help is very welcome!
To make things easier, I've uploaded a working WPF solution for you to the code reference SourceForge page I've been building:
http://sourceforge.net/projects/emguexample/files/Capture/Kinect_SkinDetector_WPF.zip/download
https://sourceforge.net/projects/emguexample/files/Capture/
This was designed and tested using EMGU x64 2.42, so in the Lib folder of the project you will find the referenced DLLs. If you are using a different version, you will need to delete the current references and replace them with the version you're using.
Secondly, the project is designed, like all projects from the code reference library, to be built from the Emgu.CV.Example folder into the ..\EMGU 2.X.X.X\bin.. global bin directory, where the compiled OpenCV libraries sit in a folder named either x86 or x64.
If you struggle to get the code working I can provide all the components, but I hate redistributing all the OpenCV files that you already have, so let me know if you want this.
You will need to resize the MainWindow manually to display both images, as I didn't spend too much time playing with the layout.
So the code...
In the form initialisation method I check for the Kinect sensor and set up the event handlers for the frames-ready events. I have left the original threshold values and skinDetector type, although I don't use the EMGU version; I just forgot to remove it. You will need to play with the threshold values and so on.
//// Look through all sensors and start the first connected one.
//// This requires that a Kinect is connected at the time of app startup.
//// To make your app robust against plug/unplug,
//// it is recommended to use KinectSensorChooser provided in Microsoft.Kinect.Toolkit (See components in Toolkit Browser).
foreach (var potentialSensor in KinectSensor.KinectSensors)
{
    if (potentialSensor.Status == KinectStatus.Connected)
    {
        this.KS = potentialSensor;
        break;
    }
}

// If we have a Kinect sensor, we will set it up
if (null != KS)
{
    // Turn on the color stream to receive color frames
    KS.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);

    // Turn on the depth stream to receive depth frames
    KS.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);

    // Start the streaming process
    KS.Start();

    // Create a link to a callback to deal with the frames
    KS.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(KS_AllFramesReady);

    // We set up a thread to process the image/disparity map from the Kinect.
    // Why? The Kinect AllFramesReady event has a timeout; if it has not finished, the streams will simply stop.
    KinectBuffer = new Thread(ProcessBuffer);

    hsv_min = new Hsv(0, 45, 0);
    hsv_max = new Hsv(20, 255, 255);
    YCrCb_min = new Ycc(0, 131, 80);
    YCrCb_max = new Ycc(255, 185, 135);
    detector = new AdaptiveSkinDetector(1, AdaptiveSkinDetector.MorphingMethod.NONE);
    skinDetector = new YCrCbSkinDetector();
}
I always play with the Kinect data in a new thread for speed, but you may want to advance this to a BackgroundWorker if you plan to do any heavier processing, so it is better managed; a minimal sketch of that alternative follows below.
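For illustration only, here is a minimal sketch of the BackgroundWorker wiring (this is an assumption, not part of the uploaded project), running the same ProcessBuffer method:
// Requires System.ComponentModel; replaces KinectBuffer = new Thread(ProcessBuffer)
BackgroundWorker kinectWorker = new BackgroundWorker();
kinectWorker.WorkerSupportsCancellation = true;
kinectWorker.DoWork += (s, args) => ProcessBuffer();
kinectWorker.RunWorkerAsync();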
The thread calls the ProcessBuffer() method. You can ignore all the commented code, as this is the remnant of the code used to display the depth image. Again, I'm using the Marshal.Copy method to keep things fast, but the thing to look for is the Dispatcher.BeginInvoke in WPF that allows the images to be displayed from the Kinect thread. This is required as I'm not processing on the main thread.
// This takes the byte[] array from the Kinect and makes a bitmap from the colour data for us
byte[] pixeldata = new byte[CF.PixelDataLength];
CF.CopyPixelDataTo(pixeldata);

System.Drawing.Bitmap bmap = new System.Drawing.Bitmap(CF.Width, CF.Height, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
BitmapData bmapdata = bmap.LockBits(new System.Drawing.Rectangle(0, 0, CF.Width, CF.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
IntPtr ptr = bmapdata.Scan0;
Marshal.Copy(pixeldata, 0, ptr, CF.PixelDataLength);
bmap.UnlockBits(bmapdata);

// Display our colour frame
currentFrame = new Image<Bgr, Byte>(bmap);
Image<Gray, Byte> skin2 = skinDetector.DetectSkin(currentFrame, YCrCb_min, YCrCb_max);
ExtractContourAndHull(skin2);
DrawAndComputeFingersNum();

// Display our images using the WPF Dispatcher.BeginInvoke, as this is a sub thread
Dispatcher.BeginInvoke((Action)(() =>
{
    ColorImage.Source = BitmapSourceConvert.ToBitmapSource(currentFrame);
}), System.Windows.Threading.DispatcherPriority.Render, null);

Dispatcher.BeginInvoke((Action)(() =>
{
    SkinImage.Source = BitmapSourceConvert.ToBitmapSource(skin2);
}), System.Windows.Threading.DispatcherPriority.Render, null);
I hope this helps. I will at some point neaten up the code I uploaded.
Cheers
Related
I use _capture.SetCaptureProperty(CapProp.Exposure, x); there is no error, but it has no effect on my camera. Any idea how to set the exposure in Emgu CV?
Camera model: Basler (acA1300-30gm)
There is no change after I run the code:
_capture.SetCaptureProperty(CapProp.XiExposure, 30000.0);
_capture.SetCaptureProperty(CapProp.Exposure, 30000.0);
_capture.SetCaptureProperty(CapProp.XiExposureBurstCount, 30000.0);
Camera Property
I've also found that with Basler cameras the capture properties can't be set; the way I coded my application instead was to use the free Pylon runtime SDK to capture the frames and pass them to Emgu CV. The following snippets from that project should get you started:
using Basler.Pylon; // You'll need to add a reference to Basler.Pylon.DLL as well
var cam = new Camera();
cam.Open();
Console.WriteLine("Using camera {0}.", cam.CameraInfo[CameraInfoKey.SerialNumber]);
cam.Parameters[PLCamera.ExposureTimeAbs].SetValue(1000, FloatValueCorrection.ClipToRange);
cam.StreamGrabber.ImageGrabbed += OnImageGrabbed1;
cam.StreamGrabber.Start(GrabStrategy.OneByOne, GrabLoop.ProvidedByStreamGrabber);
private void OnImageGrabbed1(object sender, ImageGrabbedEventArgs e)
{
    IGrabResult grabResult = e.GrabResult;
    if (grabResult.GrabSucceeded)
    {
        using (Bitmap bm = new Bitmap(grabResult.Width, grabResult.Height, PixelFormat.Format32bppRgb))
        {
            BitmapData bmpData = bm.LockBits(new Rectangle(0, 0, bm.Width, bm.Height), ImageLockMode.ReadWrite, bm.PixelFormat);
            converter.OutputPixelFormat = PixelType.BGRA8packed;
            IntPtr ptrBmp = bmpData.Scan0;
            converter.Convert(ptrBmp, bmpData.Stride * bm.Height, grabResult);
            bm.UnlockBits(bmpData);
            using (Image<Bgr, byte> imageCV = new Image<Bgr, byte>(bm))
            {
                // Example Emgu CV function
                double mean = CvInvoke.Mean(imageCV.Mat).V0;
            }
        }
    }
    else
    {
        LogMessage($"Camera 1 error: {grabResult.ErrorCode} {grabResult.ErrorDescription}");
    }
}
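Note that the snippet assumes a converter field declared elsewhere in the class; in the Pylon .NET API that is a PixelDataConverter:
// Field assumed by OnImageGrabbed1 above: converts grab results to BGRA pixel data
private PixelDataConverter converter = new PixelDataConverter();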
I've been using this code to process real-time video at 40fps from one of their Gigabit cameras and the performance seems good.
I would like to implement hardware acceleration for a C# WinForms application. The reason is that I have to draw 150 720p images, and the five PictureBox controls take too long (scaling + drawing of the images), so there are problems with disposing and reloading.
So I tried SharpDX.
But now I'm stuck and do not know how to draw the 2D texture. To test the code I just have a test button and one PictureBox.
When I run the code, DirectX (Direct2D or Direct3D) is loaded in the PictureBox; I can see the black background. But I do not understand how the texture must be drawn.
String imageFile = "Image.JPG";
Control TargetControl = this.pictureBoxCurrentFrameL;

int TotalWidth = TargetControl.Width;
int TotalHeight = TargetControl.Height;

SharpDX.Direct3D11.Device defaultDevice = new SharpDX.Direct3D11.Device(SharpDX.Direct3D.DriverType.Hardware, SharpDX.Direct3D11.DeviceCreationFlags.Debug);
SharpDX.Toolkit.Graphics.GraphicsDevice graphicsDevice = SharpDX.Toolkit.Graphics.GraphicsDevice.New(defaultDevice);

SharpDX.Toolkit.Graphics.PresentationParameters presentationParameters = new SharpDX.Toolkit.Graphics.PresentationParameters();
presentationParameters.DeviceWindowHandle = this.pictureBoxCurrentFrameL.Handle;
presentationParameters.BackBufferWidth = TotalWidth;
presentationParameters.BackBufferHeight = TotalHeight;

SharpDX.Toolkit.Graphics.SwapChainGraphicsPresenter swapChainGraphicsPresenter = new SharpDX.Toolkit.Graphics.SwapChainGraphicsPresenter(graphicsDevice, presentationParameters);
SharpDX.Toolkit.Graphics.Texture2D texture2D = SharpDX.Toolkit.Graphics.Texture2D.Load(graphicsDevice, imageFile);

// Now I should draw. But how?
swapChainGraphicsPresenter.Present();
Using Microsoft Visual Studio Community 2015 (.NET 4, C# WinForms) on Windows 10 and SharpDX SDK 2.6.3!
Thank you for your assistance.
I solved the problem by simply switching to SlimDX (SlimDX Runtime .NET 4.0 x64 January 2012.msi, .NET 4, Windows 10, MS Visual Studio Community 2015, WinForms app). There are several useful tutorials.
To use SlimDX, just reference its single DLL in your project. After installing SlimDX you will find the SlimDX.dll file somewhere on your C drive.
It is important to understand that you need at least one factory and a render target for Direct2D. The render target points to the object to be used (control/form/etc.) and takes over the drawing.
A swap chain is not needed; it is probably used by the render target internally. The biggest part is converting a bitmap into a usable Direct2D bitmap (for drawing). Alternatively, you can also process the bitmap data from a MemoryStream.
For those who are looking for a solution too:
Control targetControl = this.pictureBoxCurrentFrameL;
String imageFile = "Image.JPG";

// Update control styles; works for forms, not for controls. I solve this differently later.
//this.SetStyle(ControlStyles.AllPaintingInWmPaint, true);
//this.SetStyle(ControlStyles.Opaque, true);
//this.SetStyle(ControlStyles.ResizeRedraw, true);

// Get the requested debug level
SlimDX.Direct2D.DebugLevel debugLevel = SlimDX.Direct2D.DebugLevel.None;

// Resources for Direct2D rendering
SlimDX.Direct2D.Factory d2dFactory = new SlimDX.Direct2D.Factory(SlimDX.Direct2D.FactoryType.Multithreaded, debugLevel);

// Create the render target
SlimDX.Direct2D.WindowRenderTarget d2dWindowRenderTarget = new SlimDX.Direct2D.WindowRenderTarget(d2dFactory, new SlimDX.Direct2D.WindowRenderTargetProperties()
{
    Handle = targetControl.Handle,
    PixelSize = targetControl.Size,
    PresentOptions = SlimDX.Direct2D.PresentOptions.Immediately
});

// Paint!
d2dWindowRenderTarget.BeginDraw();
d2dWindowRenderTarget.Clear(new SlimDX.Color4(Color.LightSteelBlue));

// Convert a System.Drawing.Bitmap into a SlimDX.Direct2D.Bitmap
System.Drawing.Bitmap bitmap = (System.Drawing.Bitmap)Properties.Resources.Image_720p; // loaded from an embedded resource; can be changed to Bitmap.FromFile(imageFile); to load from disk
SlimDX.Direct2D.Bitmap d2dBitmap = null;
System.Drawing.Imaging.BitmapData bitmapData = bitmap.LockBits(new Rectangle(new Point(0, 0), bitmap.Size), System.Drawing.Imaging.ImageLockMode.ReadOnly, System.Drawing.Imaging.PixelFormat.Format32bppPArgb); // TODO: the PixelFormat is very important! Check it!
SlimDX.DataStream dataStream = new SlimDX.DataStream(bitmapData.Scan0, bitmapData.Stride * bitmapData.Height, true, false);
SlimDX.Direct2D.PixelFormat d2dPixelFormat = new SlimDX.Direct2D.PixelFormat(SlimDX.DXGI.Format.B8G8R8A8_UNorm, SlimDX.Direct2D.AlphaMode.Premultiplied);
SlimDX.Direct2D.BitmapProperties d2dBitmapProperties = new SlimDX.Direct2D.BitmapProperties();
d2dBitmapProperties.PixelFormat = d2dPixelFormat;
d2dBitmap = new SlimDX.Direct2D.Bitmap(d2dWindowRenderTarget, new Size(bitmap.Width, bitmap.Height), dataStream, bitmapData.Stride, d2dBitmapProperties);
bitmap.UnlockBits(bitmapData);

// Draw the SlimDX.Direct2D.Bitmap
d2dWindowRenderTarget.DrawBitmap(d2dBitmap, new Rectangle(0, 0, bitmap.Width, bitmap.Height));
d2dWindowRenderTarget.EndDraw();

// Dispose of everything you don't need anymore.
//bitmap.Dispose(); // ...
So it is super simple to use Direct2D; all the code can be compressed to two main lines plus the drawing:
SlimDX.Direct2D.Factory d2dFactory = new SlimDX.Direct2D.Factory(SlimDX.Direct2D.FactoryType.Multithreaded, SlimDX.Direct2D.DebugLevel.None);
SlimDX.Direct2D.WindowRenderTarget d2dWindowRenderTarget = new SlimDX.Direct2D.WindowRenderTarget(d2dFactory, new SlimDX.Direct2D.WindowRenderTargetProperties() { Handle = targetControl.Handle, PixelSize = targetControl.Size, PresentOptions = SlimDX.Direct2D.PresentOptions.Immediately });

d2dWindowRenderTarget.BeginDraw();
d2dWindowRenderTarget.Clear(new SlimDX.Color4(Color.LightSteelBlue));
d2dWindowRenderTarget.DrawRectangle(new SlimDX.Direct2D.SolidColorBrush(d2dWindowRenderTarget, new SlimDX.Color4(Color.Red)), new Rectangle(20, 20, targetControl.Width - 40, targetControl.Height - 40));
d2dWindowRenderTarget.EndDraw();
Hello dear forum members!
I am working on a project to detect view changes in a security camera. I mean, when someone tries to move the camera (some kind of sabotage...) I have to notice it. My idea is:
capture an image from the camera every 10 seconds and compare the two pictures (the old and the current picture).
There are almost 70 cameras that I need to monitor, so I can't use live streaming because it would occupy my internet connection. I use the Emgu CV library for this task, but during my work I got stuck on a problem. Here is a piece of the code I prepared:
public class EmguCV
{
    static public Model Test(string BaseImagePath, string ActualImagePath)
    {
        double noise = 0;
        Mat curr64f = new Mat();
        Mat prev64f = new Mat();
        Mat hann = new Mat();

        Mat src1 = CvInvoke.Imread(BaseImagePath, 0);
        Mat src2 = CvInvoke.Imread(ActualImagePath, 0);
        Size size = new Size(50, 50);

        src1.ConvertTo(prev64f, Emgu.CV.CvEnum.DepthType.Cv64F);
        src2.ConvertTo(curr64f, Emgu.CV.CvEnum.DepthType.Cv64F);

        CvInvoke.CreateHanningWindow(hann, src1.Size, Emgu.CV.CvEnum.DepthType.Cv64F);
        MCvPoint2D64f shift = CvInvoke.PhaseCorrelate(curr64f, prev64f, hann, out noise);

        double value = noise;
        double radius = Math.Sqrt(shift.X * shift.X + shift.Y * shift.Y);

        Model Test = new Model() { osX = shift.X, osY = shift.Y, noise = value };
        return Test;
    }
}
Therefore, I have two questions:
How do I convert a Bitmap to a Mat structure? At the moment I read the images to compare from disk according to a file path, but I would like to compare a collection of bitmaps without saving them to my hard drive.
Do you know any other way to detect a shift between two pictures? I would be really grateful for any other suggestion in this area.
Regards,
Mariusz
I know it's very late to answer this, but today I was looking into this problem on the internet and I found something like this:
Bitmap bitmap; //This is your bitmap
Image<Bgr, byte> imageCV = new Image<Bgr, byte>(bitmap); //Image Class from Emgu.CV
Mat mat = imageCV.Mat; //This is your Image converted to Mat
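Building on that conversion, here is a minimal sketch (untested) of how the Test method from the question could accept in-memory bitmaps instead of file paths, so nothing has to be saved to disk; DetectShift is a hypothetical name:
static MCvPoint2D64f DetectShift(Bitmap baseBitmap, Bitmap actualBitmap, out double noise)
{
    // Convert each Bitmap to a grayscale Mat (Imread with flag 0 did this before)
    Mat src1 = new Image<Gray, byte>(baseBitmap).Mat;
    Mat src2 = new Image<Gray, byte>(actualBitmap).Mat;

    Mat prev64f = new Mat(), curr64f = new Mat(), hann = new Mat();
    src1.ConvertTo(prev64f, Emgu.CV.CvEnum.DepthType.Cv64F);
    src2.ConvertTo(curr64f, Emgu.CV.CvEnum.DepthType.Cv64F);

    // Same phase-correlation steps as in the original Test method
    CvInvoke.CreateHanningWindow(hann, src1.Size, Emgu.CV.CvEnum.DepthType.Cv64F);
    return CvInvoke.PhaseCorrelate(curr64f, prev64f, hann, out noise);
}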
Is there a way to resize an image using the GPU (graphics card) that is consumable from a .NET application?
I am looking for an extremely performant way to resize images and have heard that the GPU can do it much quicker than the CPU (GDI+ using C#). Are there known implementations or sample code using the GPU to resize images that I could consume in .NET?
Have you thought about using XNA to resize your images? Here you can find out how to use XNA to save an image as a PNG/JPEG to a MemoryStream and later reuse it as a Bitmap object:
EDIT: I will post an example here (taken from the link above) on how you can possibly use XNA.
public static Image Texture2Image(Texture2D texture)
{
    Image img;
    using (MemoryStream MS = new MemoryStream())
    {
        texture.SaveAsPng(MS, texture.Width, texture.Height);
        // Go to the beginning of the stream.
        MS.Seek(0, SeekOrigin.Begin);
        // Copy into a new Bitmap: GDI+ requires the source stream to stay
        // open for the lifetime of an image created with FromStream.
        using (Image fromStream = Bitmap.FromStream(MS))
        {
            img = new Bitmap(fromStream);
        }
    }
    return img;
}
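For the resize step itself, a rough sketch (untested; ResizeTexture is a hypothetical helper assuming an already initialised XNA 4.0 GraphicsDevice) is to draw the source texture into a RenderTarget2D of the target size and then hand the result to Texture2Image above:
public static Texture2D ResizeTexture(GraphicsDevice device, Texture2D source, int width, int height)
{
    // Draw offscreen into a render target of the desired size; the GPU does the scaling
    RenderTarget2D target = new RenderTarget2D(device, width, height);
    SpriteBatch batch = new SpriteBatch(device);

    device.SetRenderTarget(target);
    device.Clear(Color.Transparent);
    batch.Begin();
    batch.Draw(source, new Rectangle(0, 0, width, height), Color.White);
    batch.End();
    device.SetRenderTarget(null); // back to the backbuffer

    return target; // RenderTarget2D derives from Texture2D
}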
I also found out today that you can get OpenCV to use the GPU/multicore CPUs. You can, for example, use a .NET wrapper such as Emgu and use its Image class to manipulate your picture and return a .NET Bitmap:
public static Bitmap ResizeBitmap(Bitmap sourceBM, int width, int height)
{
    // Initialize the Emgu Image object
    Image<Bgr, Byte> img = new Image<Bgr, Byte>(sourceBM);

    // Resize using linear interpolation (Resize returns a new image)
    Image<Bgr, Byte> resized = img.Resize(width, height, INTER.CV_INTER_LINEAR);

    // Return a .NET Bitmap object
    return resized.ToBitmap();
}
I wrote a quick spike to check performance using WPF, though I cannot say for sure that it's using the GPU.
Still, see below. This scales an image to 33.5 (or whatever) times its original size.
public void Resize()
{
    double scaleFactor = 33.5;

    var originalFileStream = System.IO.File.OpenRead(@"D:\SkyDrive\Pictures\Random\Misc\DoIt.jpg");
    var originalBitmapDecoder = JpegBitmapDecoder.Create(originalFileStream, BitmapCreateOptions.None, BitmapCacheOption.OnLoad);
    BitmapFrame originalBitmapFrame = originalBitmapDecoder.Frames.First();
    var originalPixelFormat = originalBitmapFrame.Format;

    TransformedBitmap transformedBitmap =
        new TransformedBitmap(originalBitmapFrame, new System.Windows.Media.ScaleTransform()
        {
            ScaleX = scaleFactor,
            ScaleY = scaleFactor
        });

    int stride = ((transformedBitmap.PixelWidth * transformedBitmap.Format.BitsPerPixel) + 7) / 8;
    int pixelCount = (stride * (transformedBitmap.PixelHeight - 1)) + stride;

    byte[] buffer = new byte[pixelCount];
    transformedBitmap.CopyPixels(buffer, stride, 0);

    WriteableBitmap transformedWriteableBitmap = new WriteableBitmap(transformedBitmap.PixelWidth, transformedBitmap.PixelHeight, transformedBitmap.DpiX, transformedBitmap.DpiY, transformedBitmap.Format, transformedBitmap.Palette);
    transformedWriteableBitmap.WritePixels(new Int32Rect(0, 0, transformedBitmap.PixelWidth, transformedBitmap.PixelHeight), buffer, stride, 0);

    BitmapFrame transformedFrame = BitmapFrame.Create(transformedWriteableBitmap);
    var jpegEncoder = new JpegBitmapEncoder();
    jpegEncoder.Frames.Add(transformedFrame);

    using (var outputFileStream = System.IO.File.OpenWrite(@"C:\DATA\Scrap\WPF.jpg"))
    {
        jpegEncoder.Save(outputFileStream);
    }
}
The image I was testing was 495 x 360. It resized it to over 16k x 12k in a couple of seconds, including the save out.
It resizes to 1.5x around 165 times a second in a single-core run. With an i7, the GPU seemingly doing nothing and the CPU at 20%, I'd expect to get 5x more throughput when multithreaded.
Performance profiling shows a hot path into wpfgfx_v0400.dll, which is the native WPF graphics library and is close to DirectX (look up 'milcore' on Google).
So it might be accelerated, I don't know.
Luke
Yes, it is possible to use the GPU to resize your images. This can be done using DirectX surfaces (for example using SlimDX in C#). You create a surface and move your image onto it, stretch this surface onto another target surface of the desired size using only the GPU, and finally read the resized image back from the target surface. In this scenario, the pixel formats of the surfaces can differ and the GPU handles the conversion automatically. But there are things that can affect the performance of this operation: moving data between GPU and CPU is a time-consuming process, so depending on your situation you can apply techniques to boost performance, such as avoiding extra data transfers between CPU and GPU memory. A rough sketch of the idea follows below.
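This sketch is an illustration only and untested: it uses SlimDX's Direct3D 9 wrapper, the method names (Surface.CreateOffscreenPlain, StretchRectangle, Surface.ToStream) and the sizes and file names are assumptions, and error handling, device management and disposal are omitted:
using System.Windows.Forms;
using SlimDX.Direct3D9;

// Rough sketch: scale input.jpg up to 1920x1080 on the GPU.
var form = new Form(); // D3D9 needs a window handle, even for offscreen work
var d3d = new Direct3D();
var pp = new PresentParameters { Windowed = true, SwapEffect = SwapEffect.Discard };
var device = new Device(d3d, 0, DeviceType.Hardware, form.Handle, CreateFlags.HardwareVertexProcessing, pp);

// Source surface holding the original image (D3DX loads and decodes the file)
var src = Surface.CreateOffscreenPlain(device, 495, 360, Format.A8R8G8B8, Pool.Default);
Surface.FromFile(src, "input.jpg", Filter.None, 0);

// Destination render target at the desired size
var dst = Surface.CreateRenderTarget(device, 1920, 1080, Format.A8R8G8B8, MultisampleType.None, 0, false);

// The GPU performs the scaling (and any pixel-format conversion) here
device.StretchRectangle(src, dst, TextureFilter.Linear);

// Read the result back; this GPU-to-CPU transfer is the expensive part
using (var data = Surface.ToStream(dst, ImageFileFormat.Png))
using (var file = System.IO.File.Create("output.png"))
    data.CopyTo(file);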
Alright, so basically I am trying to use OpenCV with the Kinect (Microsoft's new Kinect 1.0 SDK). I am very new to both C# and the Kinect, but what I want to do is use the Kinect for facial recognition using EMGU (an OpenCV wrapper for C#). So far I have successfully captured the video stream from the Kinect, converted it into an EMGU Image<>, then converted it to a Byte[] array so that I can use a BitmapSource to display my image on the screen.
While that works fine, problems seem to arise when I try to actually do some image processing with the Image<> class. It actually seems to be processing fine, but it is not very fast. This wouldn't necessarily be a problem for me, but now the BitmapSource isn't being displayed at all.
Here is an example of my code to detect faces:
img = new Image<Bgr, byte>(clone);
haar = new HaarCascade("directory");
Image<Gray, Byte> gray;

using (HaarCascade face = new HaarCascade("blablabla.xml"))
using (HaarCascade eye = new HaarCascade("blarg.xml"))
{
    using (gray = img.Convert<Gray, Byte>()) // Convert it to grayscale
    {
        MCvAvgComp[] facesDetected = face.Detect(gray, 1.1, 1, Emgu.CV.CvEnum.HAAR_DETECTION_TYPE.DO_CANNY_PRUNING, new System.Drawing.Size(img.Width / 8, img.Height / 8));
        foreach (MCvAvgComp f in facesDetected)
        {
            img.Draw(f.rect, new Bgr(System.Drawing.Color.Blue), 2);
            imgDoneProc = img.ToBitmap();
        }
    }
}
Then I use BitmapSource.Create():
BitmapSource bmapa = BitmapSource.Create(PImage.Width, PImage.Height, 96, 96, PixelFormats.Bgr32, null, bmpBytes, PImage.Width * PImage.BytesPerPixel);
image1.Source = bmapa;
(PImage is the stream from the Kinect; bmpBytes is a Byte[] converted from the Image<>)
So, if I comment out the code that does the image processing, all of the converting back and forth works fine. When I add the image processing code, I can write some useful data to the console, but the image is not displayed. I have also noticed that 'bmapa' is not updated quickly. That is the only noticeable difference, other than nothing being displayed in image1.
So, am I using BitmapSource incorrectly, or is there a way to speed up my code or perhaps slow BitmapSource's "refresh rate"? When I am just converting between data structures, I get a steady stream from the Kinect and all works fine.
Thanks,
Brent