EmguCV Tracker implementation in C#

Looking for example code to understand how to implement an EmguCV tracker. I tried a few things, like in this (admittedly poorly written) code:
class ObjectTracker
{
    Emgu.CV.Tracking.TrackerCSRT tracker = new Emgu.CV.Tracking.TrackerCSRT();
    bool isActive = false;
    public bool trackerActive = false;

    public void Track(Bitmap pic, Rectangle selection, out Rectangle bound)
    {
        Rectangle result = new Rectangle();
        Bitmap bitmap = pic; // This is your bitmap
        Emgu.CV.Image<Bgr, Byte> imageCV = new Emgu.CV.Image<Bgr, byte>(bitmap); // Image class from Emgu.CV
        Emgu.CV.Mat mat = imageCV.Mat; // This is your image converted to Mat
        if (tracker.Init(mat, selection))
        {
            while (tracker.Update(mat, out bound))
            {
                result = bound;
            }
        }
        bound = result;
    }
}
I'm aware there are a few logic flaws, but I still couldn't manage to get any result across my different attempts.
Thanks!

Turns out, it is really simple.
First, define a tracker of the type you want.
Example:
Emgu.CV.Tracking.TrackerCSRT Tracker= new Emgu.CV.Tracking.TrackerCSRT();
Then, before scanning the selected area (ROI), you need to initialize the tracker.
Example:
Tracker.Init(image, roi);
Important note: image must be of type Emgu.CV.Mat, and roi of type System.Drawing.Rectangle.
To convert your Bitmap to a Mat you can use the following method:
Example:
public Emgu.CV.Mat toMat(Bitmap pic)
{
    Bitmap bitmap = pic;
    Emgu.CV.Image<Bgr, Byte> imageCV = new Emgu.CV.Image<Bgr, byte>(bitmap);
    Emgu.CV.Mat mat = imageCV.Mat;
    return mat;
}
Note: This code belongs to someone else on Stack Overflow whose post I can no longer find. Thanks to them.
Finally, the following statement returns the Rectangle that envelopes the tracked object. The trick is that the tracker doesn't return the Rectangle as the method's return value; it returns it via the out parameter.
tracker.Update(image, out roi);
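Putting the pieces together, here is a minimal sketch (not the poster's exact code) of the flow described above: initialize once on the first frame, then call Update once per subsequent frame. It assumes the toMat helper shown earlier, an EmguCV version where Init/Update have the bool-returning signatures used in this post, and placeholder parameters (firstFrame, initialSelection, laterFrames) that you would supply yourself:
public Rectangle TrackAcrossFrames(Bitmap firstFrame, Rectangle initialSelection, IEnumerable<Bitmap> laterFrames)
{
    var tracker = new Emgu.CV.Tracking.TrackerCSRT();
    Rectangle roi = initialSelection;

    tracker.Init(toMat(firstFrame), roi);          // initialize on the user-selected region

    foreach (Bitmap frameBitmap in laterFrames)    // one Update call per new frame
    {
        if (tracker.Update(toMat(frameBitmap), out roi))
        {
            // roi now envelopes the tracked object in this frame; draw or store it here
        }
    }
    return roi;                                    // last known bounding box
}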
Last Note: If anyone knows how to improve performance or multithread this method, please leave a comment.
Have a nice day!

Related

OpenCV C# Frame Differencing

I'm new to computer vision and currently playing around with static frame differencing to try and determine whether there is motion in video.
My variables:
public Mat currentFrame = new Mat();
public Mat prevFrame = new Mat();
public Mat result = new Mat();
bool motion = false;
Simple differencing function (being called every frame):
public Mat getDifference(Mat videoFrame)
{
    currentFrame = videoFrame.Clone();
    Cv2.Absdiff(currentFrame, prevFrame, result);
    prevFrame = currentFrame.Clone();
    return result;
}
When motion exists the result matrix looks like this:
When motion doesn't exist the result matrix looks like this (empty):
My original idea was that if the result matrix is effectively empty (all black), then I could say motion = false. However, this is proving to be more difficult than anticipated, since it is technically never empty, so I can't say:
if(!result.Empty())
{
motion = true;
}
Without needing for loops and pixel-by-pixel analysis, is there a simple, clean 'if' statement I can use that simply says: if the matrix contains anything that isn't black pixels, motion = true? Or is this too simplistic? I'm open to hearing better ways of doing this; I had a look around on the web but there aren't many solid examples for C#. My video is playing within a WPF application in real time, so nested for loops are to be avoided.
Thanks for your time!
You could, for example, convert the matrix to an image. That should give you access to all the image manipulation functions, for example ThresholdBinary to make pixels either zero or a given value, and CountNonZero. That should give you some tools to balance how much things need to change, and how large an area needs to change.
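As a rough sketch of that idea, written with the asker's OpenCvSharp (Cv2) calls rather than the EmguCV-style Image<> methods mentioned above: threshold the difference image so small noise goes to zero, then count the remaining non-zero pixels. It assumes a 3-channel BGR difference frame, and the threshold value (25) and area fraction (0.5%) are made-up numbers you would need to tune:
public bool HasMotion(Mat diff)
{
    using (var gray = new Mat())
    using (var mask = new Mat())
    {
        // Collapse the colour difference to one channel, then zero out small (noise) differences.
        Cv2.CvtColor(diff, gray, ColorConversionCodes.BGR2GRAY);
        Cv2.Threshold(gray, mask, 25, 255, ThresholdTypes.Binary);

        // Motion = "enough" pixels changed; 0.5% of the frame is an arbitrary starting point.
        int changedPixels = Cv2.CountNonZero(mask);
        return changedPixels > 0.005 * mask.Rows * mask.Cols;
    }
}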
Found a simple way to do it; it may not be the best, but it does work.
public bool motion = false;
public Mat currentFrame = new Mat();
public Mat prevFrame = new Mat();
public Mat absDiffImage = new Mat();
public Mat grayImage = new Mat();
public Point[][] frameContours;
public HierarchyIndex[] external;
public Mat frameThresh = new Mat();

Cv2.CvtColor(currentFrame, currentFrame, ColorConversionCodes.BGRA2GRAY);
Cv2.Absdiff(currentFrame, prevFrame, absDiffImage);
Cv2.Threshold(absDiffImage, frameThresh, 80, 255, ThresholdTypes.Binary);
Cv2.FindContours(frameThresh, out frameContours, out external, RetrievalModes.List, ContourApproximationModes.ApproxSimple);

if (frameContours.Length > 20)
{
    motion = true;
}
else
{
    motion = false;
}

How to convert a Bitmap to a Mat structure in EmguCV & how to detect a shift between two images

Hello dear forum members!
I am working on a project to detect a change of view from a security camera. I mean, when someone tries to move the camera (some kind of sabotage...), I have to notice it. My idea is:
capture an image from the camera every 10 seconds and compare the two pictures (the old one and the current one).
There are almost 70 cameras which I need to monitor, so I can't use live streaming because it would saturate my internet connection. I use the Emgu CV library for this task, but during my work I got stuck on a problem. Here is the piece of code I prepared:
public class EmguCV
{
    static public Model Test(string BaseImagePath, string ActualImagePath)
    {
        double noise = 0;
        Mat curr64f = new Mat();
        Mat prev64f = new Mat();
        Mat hann = new Mat();
        Mat src1 = CvInvoke.Imread(BaseImagePath, 0);
        Mat src2 = CvInvoke.Imread(ActualImagePath, 0);
        Size size = new Size(50, 50);

        src1.ConvertTo(prev64f, Emgu.CV.CvEnum.DepthType.Cv64F);
        src2.ConvertTo(curr64f, Emgu.CV.CvEnum.DepthType.Cv64F);
        CvInvoke.CreateHanningWindow(hann, src1.Size, Emgu.CV.CvEnum.DepthType.Cv64F);

        MCvPoint2D64f shift = CvInvoke.PhaseCorrelate(curr64f, prev64f, hann, out noise);
        double value = noise;
        double radius = Math.Sqrt(shift.X * shift.X + shift.Y * shift.Y);

        Model Test = new Model() { osX = shift.X, osY = shift.Y, noise = value };
        return Test;
    }
}
Therefore, I have two questions:
How do I convert a Bitmap to a Mat structure? At the moment I read the images to compare from disk by file path, but I would like to compare a collection of bitmaps without saving them to my hard drive.
Do you know any other way to detect a shift between two pictures? I would be really grateful for any other suggestion in this area.
Regards,
Mariusz
I know it's very late to answer this, but today I was looking into this problem on the internet and I found something like this:
Bitmap bitmap; //This is your bitmap
Image<Bgr, byte> imageCV = new Image<Bgr, byte>(bitmap); //Image Class from Emgu.CV
Mat mat = imageCV.Mat; //This is your Image converted to Mat
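Building on that conversion, here is a sketch of how the asker's phase-correlation code could take two Bitmaps directly instead of file paths. It assumes an EmguCV version where Image<,> can be constructed from a Bitmap, as in the snippet above; the method name and parameters are just placeholders:
static MCvPoint2D64f DetectShift(Bitmap baseBitmap, Bitmap actualBitmap, out double noise)
{
    // Bitmap -> Image<Bgr, byte> -> grayscale Mat, replacing CvInvoke.Imread(path, 0)
    Mat src1 = new Image<Bgr, byte>(baseBitmap).Convert<Gray, byte>().Mat;
    Mat src2 = new Image<Bgr, byte>(actualBitmap).Convert<Gray, byte>().Mat;

    Mat prev64f = new Mat();
    Mat curr64f = new Mat();
    Mat hann = new Mat();
    src1.ConvertTo(prev64f, Emgu.CV.CvEnum.DepthType.Cv64F);
    src2.ConvertTo(curr64f, Emgu.CV.CvEnum.DepthType.Cv64F);
    CvInvoke.CreateHanningWindow(hann, src1.Size, Emgu.CV.CvEnum.DepthType.Cv64F);

    // The out parameter receives the response/noise value, as in the original code.
    return CvInvoke.PhaseCorrelate(curr64f, prev64f, hann, out noise);
}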

Emgucv Convert<Hsv, Byte>() image

I am having a problem with EmguCV. I used a demo application and edited it to fit my needs.
It involves the following function:
public override Image<Gray, byte> DetectSkin(Image<Bgr, byte> Img, IColor min, IColor max)
{
    Image<Hsv, Byte> currentHsvFrame = Img.Convert<Hsv, Byte>();
    Image<Gray, byte> skin = new Image<Gray, byte>(Img.Width, Img.Height);
    skin = currentHsvFrame.InRange((Hsv)min, (Hsv)max);
    return skin;
}
In the demo application, the Image comes from a video. The frame is captured from the video like this:
Image<Bgr, Byte> currentFrame;
grabber = new Emgu.CV.Capture(@".\..\..\..\M2U00253.MPG");
grabber.QueryFrame();
currentFrame = grabber.QueryFrame();
In my application, the Image comes from a microsoft kinect stream.
I use the following function:
private void SensorColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
    using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
    {
        if (colorFrame != null)
        {
            // Copy the pixel data from the image to a temporary array
            colorFrame.CopyPixelDataTo(this.colorPixels);

            // Write the pixel data into our bitmap
            this.colorBitmap.WritePixels(
                new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
                this.colorPixels,
                this.colorBitmap.PixelWidth * sizeof(int),
                0);

            Bitmap b = BitmapFromWriteableBitmap(this.colorBitmap);
            currentFrame = new Image<Bgr, byte>(b);
            currentFrameCopy = currentFrame.Copy();
            skinDetector = new YCrCbSkinDetector();
            Image<Gray, Byte> skin = skinDetector.DetectSkin(currentFrame, YCrCb_min, YCrCb_max);
        }
    }
}

private static System.Drawing.Bitmap BitmapFromWriteableBitmap(WriteableBitmap writeBmp)
{
    System.Drawing.Bitmap bmp;
    using (System.IO.MemoryStream outStream = new System.IO.MemoryStream())
    {
        BitmapEncoder enc = new BmpBitmapEncoder();
        enc.Frames.Add(BitmapFrame.Create((BitmapSource)writeBmp));
        enc.Save(outStream);
        bmp = new System.Drawing.Bitmap(outStream);
    }
    return bmp;
}
Now, the demo application works, and mine doesn't. Mine gives the following exception:
And the image here contains the following:
I really don't understand this exception. And now, when I run the demo (the working application), the image contains:
Which is, in my eyes, exactly the same. I really don't understand this. Help is very welcome!
To make things easier I've uploaded a working WPF solution for you to the code reference sourceforge page I've been building:
http://sourceforge.net/projects/emguexample/files/Capture/Kinect_SkinDetector_WPF.zip/download
https://sourceforge.net/projects/emguexample/files/Capture/
This was designed and tested using EMGU x64 2.42 so in the Lib folder of the project you will find the referenced dlls. If you are using a different version you will need to delete the current references and replace them with the version you're using.
Secondly, the project is designed, like all projects from the code reference library, to be built from the Emgu.CV.Example folder into the ..\EMGU 2.X.X.X\bin.. global bin directory, where the compiled OpenCV libraries sit in a folder named either x86 or x64.
If you struggle to get the code working I can provide all the components, but I hate redistributing all the OpenCV files that you already have, so let me know if you want them.
You will need to resize the MainWindow manually to display both images, as I didn't spend too much time playing with the layout.
So the code...
In the form initialisation method I check for the Kinect sensor and set up the event handlers for the frames-ready events. I have left the original threshold values and skinDetector type, although I don't use the EMGU version; I just forgot to remove it. You will need to play with the threshold values and so on.
//// Look through all sensors and start the first connected one.
//// This requires that a Kinect is connected at the time of app startup.
//// To make your app robust against plug/unplug,
//// it is recommended to use KinectSensorChooser provided in Microsoft.Kinect.Toolkit (See components in Toolkit Browser).
foreach (var potentialSensor in KinectSensor.KinectSensors)
{
    if (potentialSensor.Status == KinectStatus.Connected)
    {
        this.KS = potentialSensor;
        break;
    }
}

// If we have a Kinect sensor we will set it up
if (null != KS)
{
    // Turn on the color stream to receive color frames
    KS.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
    // Turn on the depth stream to receive depth frames
    KS.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
    // Start the streaming process
    KS.Start();
    // Create a link to a callback to deal with the frames
    KS.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(KS_AllFramesReady);
    // We set up a thread to process the image/disparity map from the Kinect.
    // Why? The Kinect AllFramesReady event has a timeout; if processing has not finished, the streams will simply stop.
    KinectBuffer = new Thread(ProcessBuffer);

    hsv_min = new Hsv(0, 45, 0);
    hsv_max = new Hsv(20, 255, 255);
    YCrCb_min = new Ycc(0, 131, 80);
    YCrCb_max = new Ycc(255, 185, 135);
    detector = new AdaptiveSkinDetector(1, AdaptiveSkinDetector.MorphingMethod.NONE);
    skinDetector = new YCrCbSkinDetector();
}
I always play with the Kinect data in a new thread for speed, but you may want to advance this to a BackgroundWorker if you plan to do any heavier processing, so it is better managed.
The thread calls the ProcessBuffer() method. You can ignore all the commented code, as it is the remnant of the code used to display the depth image. Again, I'm using the Marshal copy method to keep things fast, but the thing to look for is the Dispatcher.BeginInvoke in WPF that allows the images to be displayed from the Kinect thread. This is required as I'm not processing on the main thread.
//This takes the byte[] array from the kinect and makes a bitmap from the colour data for us
byte[] pixeldata = new byte[CF.PixelDataLength];
CF.CopyPixelDataTo(pixeldata);
System.Drawing.Bitmap bmap = new System.Drawing.Bitmap(CF.Width, CF.Height, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
BitmapData bmapdata = bmap.LockBits(new System.Drawing.Rectangle(0, 0, CF.Width, CF.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
IntPtr ptr = bmapdata.Scan0;
Marshal.Copy(pixeldata, 0, ptr, CF.PixelDataLength);
bmap.UnlockBits(bmapdata);

//display our colour frame
currentFrame = new Image<Bgr, Byte>(bmap);
Image<Gray, Byte> skin2 = skinDetector.DetectSkin(currentFrame, YCrCb_min, YCrCb_max);
ExtractContourAndHull(skin2);
DrawAndComputeFingersNum();

//Display our images using WPF Dispatcher Invoke as this is a sub thread.
Dispatcher.BeginInvoke((Action)(() =>
{
    ColorImage.Source = BitmapSourceConvert.ToBitmapSource(currentFrame);
}), System.Windows.Threading.DispatcherPriority.Render, null);

Dispatcher.BeginInvoke((Action)(() =>
{
    SkinImage.Source = BitmapSourceConvert.ToBitmapSource(skin2);
}), System.Windows.Threading.DispatcherPriority.Render, null);
I hope this helps. I will neaten up the code I uploaded at some point.
Cheers

How can you copy part of a WriteableBitmap to another WriteableBitmap?

How would you copy a part from one WriteableBitmap to another WriteableBitmap? I've written and used dozens of 'copypixel' and transparent copies in the past, but I can't seem to find the equivalent for WPF C#.
This is either the most difficult question in the world or the easiest because absolutely nobody is touching it with a ten foot pole.
Use WriteableBitmapEx from http://writeablebitmapex.codeplex.com/
Then use the Blit method as below.
private WriteableBitmap bSave;
private WriteableBitmap bBase;

private void test()
{
    bSave = BitmapFactory.New(200, 200); // your destination
    bBase = BitmapFactory.New(200, 200); // your source
    // here paint something on either bitmap.
    Rect rec = new Rect(0, 0, 199, 199);
    using (bSave.GetBitmapContext())
    {
        using (bBase.GetBitmapContext())
        {
            bSave.Blit(rec, bBase, rec, WriteableBitmapExtensions.BlendMode.Additive);
        }
    }
}
You can use BlendMode.None for higher performance if you don't need to preserve any information in your destination. When using Additive you get alpha compositing between the source and destination.
There does not appear to be a way to copy directly from one to another, but you can do it in two steps: use an array and CopyPixels to get the pixels out of one, and then WritePixels to get them into the other.
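A minimal sketch of that two-step approach in plain WPF, with no extra libraries. It assumes both bitmaps share the same pixel format; the rectangle and offset parameters are placeholders you would supply:
public static void CopyRegion(WriteableBitmap source, Int32Rect sourceRect,
                              WriteableBitmap destination, int destX, int destY)
{
    int bytesPerPixel = (source.Format.BitsPerPixel + 7) / 8;
    int stride = sourceRect.Width * bytesPerPixel;
    byte[] pixels = new byte[stride * sourceRect.Height];

    // Step 1: copy the region out of the source into a managed array...
    source.CopyPixels(sourceRect, pixels, stride, 0);

    // Step 2: ...then write that array into the destination at (destX, destY).
    destination.WritePixels(new Int32Rect(destX, destY, sourceRect.Width, sourceRect.Height),
                            pixels, stride, 0);
}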
I agree with Guy above that the easiest method is to simply use the WriteableBitmapEx library; however, the Blit function is for compositing a foreground and background image. The most efficient method to copy a part of one WriteableBitmap to another WriteableBitmap would be to use the Crop function:
var DstImg = SrcImg.Crop(new Rect(...));
Note that your SrcImg WriteableBitmap must be in the Pbgra32 format to be operated on by the WriteableBitmapEx library. If your bitmap isn't in this form, then you can easily convert it before cropping:
var tmp = BitmapFactory.ConvertToPbgra32Format(SrcImg);
var DstImg = tmp.Crop(new Rect(...));
public static void CopyPixelsTo(this BitmapSource sourceImage, Int32Rect sourceRoi, WriteableBitmap destinationImage, Int32Rect destinationRoi)
{
    var croppedBitmap = new CroppedBitmap(sourceImage, sourceRoi);
    int stride = croppedBitmap.PixelWidth * (croppedBitmap.Format.BitsPerPixel / 8);
    var data = new byte[stride * croppedBitmap.PixelHeight];
    // Is it possible to copy directly from the sourceImage into the destinationImage?
    croppedBitmap.CopyPixels(data, stride, 0);
    destinationImage.WritePixels(destinationRoi, data, stride, 0);
}

Can't access Image<Bgr, byte>.Data property in C# EmguCV

I could access each pixel of a Gray image using Data[,,], but cannot do so for a Bgr image.
I have written the following code:
Image<Bgr, byte> currentFrame = capture.QueryFrame();
Image<Gray, byte> grayFrame = currentFrame.Convert<Gray, byte>();
Byte gray = grayFrame.Data[0, 10, 0];
Byte blue = currentFrame.Data[0, 10, 0];
which throws an exception: Object reference not set to an instance of an object.
I checked by adding breakpoint and the result was this:
currentFrame.Data is null
grayFrame.Data is a 3D array
gray has the value 71
and then the next line caused the error.
Why is currentFrame.Data, which should have been a 3D array, null? How can I access the Image.Data property for a Bgr image?
I am using EmguCV 2.2.1. The same problem occurred with version 2.1.
Thanks for any help
What I have found is quite surprising.
Image<Bgr, byte> image = capture.QueryFrame();
byte b;
try
{
    b = image.Data[0, 0, 0]; // Line (A)
}
catch (Exception ex)
{
    MessageBox.Show("Before Convert: " + ex.Message);
}

image = image.Convert<Bgr, byte>();
try
{
    b = image.Data[0, 0, 0]; // Line (B)
}
catch (Exception ex)
{
    MessageBox.Show("After Convert: " + ex.Message);
}
In the above code, Line (A) throws the exception "Object reference not set to an instance of an object". But after adding the code
image = image.Convert<Bgr, byte>();
Line (B) runs smoothly without any exception.
Does anyone know why this is happening?
[Though it is very late to answer, I am posting this for future reference.]
Recently I faced the same problem. I could not access the pixel values of an image returned by QueryFrame through the Data property, but after applying any operation (e.g. resize, convert) the resulting image is accessible. The reason behind this, and the solution, is quite simple.
The Data property of an image returned by QueryFrame is always null: its image data is stored in unmanaged memory and is therefore not accessible through the Data property. To access the pixel values of a frame, you just have to clone it.
Image<Bgr, byte> currentFrame = capture.QueryFrame();
Image<Bgr, byte> frame = currentFrame.Clone();
// now access using Data property
byte b = frame.Data[0,0,0];
byte g = frame.Data[0,0,1];
byte r = frame.Data[0,0,2];
capture.QueryFrame() returns null, which you try to access on the last line. I'd look inside that method. I'm guessing .Convert is an extension method, and that that method checks for null and returns something that is not null.
Hey there, try adding the .xml file to your .../bin/Debug folder. Then in your ProcessFrame method (haar is a HaarCascade): haar = new HaarCascade("haarcascade_frontalface_default.xml");
