OpenCV C# Frame Differencing

I'm new to computer vision and currently playing around with static frame differencing to try and determine whether there is motion in video.
My variables:
public Mat currentFrame = new Mat();
public Mat prevFrame = new Mat();
public Mat result = new Mat();
bool motion = false;
Simple differencing function (being called every frame):
public Mat getDifference(Mat videoFrame)
{
    currentFrame = videoFrame.Clone();
    // Guard the very first call: prevFrame is still empty, so Absdiff would fail.
    if (prevFrame.Empty())
        prevFrame = currentFrame.Clone();
    Cv2.Absdiff(currentFrame, prevFrame, result);
    prevFrame = currentFrame.Clone();
    return result;
}
When motion exists, the result matrix contains visible bright regions where pixels changed.
When motion doesn't exist, the result matrix is effectively empty (all black).
My original idea was that if the result matrix is effectively empty (all black), then I could say motion = false. However, this is proving more difficult than anticipated, since the matrix is technically never empty, so I can't say:
if(!result.Empty())
{
motion = true;
}
Without resorting to for loops and pixel-by-pixel analysis, is there a simple, clean 'if' statement that says: if the matrix contains anything that isn't black pixels, motion = true? Or is this too simplistic? I'm open to hearing better ways of doing this; I had a look around on the web, but there aren't many solid examples for C#. My video is playing within a WPF application in real time, so nested for loops are to be avoided.
Thanks for your time!

You could, for example, convert the matrix to an image. That should give you access to all the image manipulation functions, for example ThresholdBinary to make pixels either zero or a given value, and CountNonZero. Those give you tools to balance how much a pixel needs to change and how large an area needs to change.
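For illustration, a minimal OpenCvSharp sketch of that idea (the threshold of 25 and the 0.5% area ratio are assumed starting points to tune, and the color conversion code should match your frame format):
// Flatten the diff to one channel, keep only pixels that changed strongly,
// then decide based on how many pixels survived the threshold.
Mat grayDiff = new Mat();
Mat binaryDiff = new Mat();
Cv2.CvtColor(result, grayDiff, ColorConversionCodes.BGRA2GRAY);
Cv2.Threshold(grayDiff, binaryDiff, 25, 255, ThresholdTypes.Binary);
int changedPixels = Cv2.CountNonZero(binaryDiff);
motion = changedPixels > 0.005 * binaryDiff.Rows * binaryDiff.Cols; // >0.5% of the frame changed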

Found a simple way to do it; it may not be the best, but it does work.
public bool motion = false;
public Mat currentFrame = new Mat();
public Mat prevFrame = new Mat();
public Mat absDiffImage = new Mat();
public Mat grayImage = new Mat();
public Point[][] frameContours;
public HierarchyIndex[] external;
public Mat frameThresh = new Mat();
Then, once per frame:
// Work in grayscale, diff against the previous frame, and threshold the
// difference so only strong per-pixel changes survive.
Cv2.CvtColor(currentFrame, currentFrame, ColorConversionCodes.BGRA2GRAY);
Cv2.Absdiff(currentFrame, prevFrame, absDiffImage);
Cv2.Threshold(absDiffImage, frameThresh, 80, 255, ThresholdTypes.Binary);
// Count the contours of the changed regions; more than 20 counts as motion (tunable).
Cv2.FindContours(frameThresh, out frameContours, out external, RetrievalModes.List, ContourApproximationModes.ApproxSimple);
motion = frameContours.Length > 20;
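For context, a sketch of how the statements above can be wrapped into a per-frame method (the first-frame guard and the prevFrame update are my additions, mirroring how getDifference in the question maintains prevFrame):
public bool DetectMotion(Mat videoFrame)
{
    currentFrame = videoFrame.Clone();
    Cv2.CvtColor(currentFrame, currentFrame, ColorConversionCodes.BGRA2GRAY);
    if (prevFrame.Empty()) // first frame: nothing to diff against yet
    {
        prevFrame = currentFrame.Clone();
        return false;
    }
    Cv2.Absdiff(currentFrame, prevFrame, absDiffImage);
    Cv2.Threshold(absDiffImage, frameThresh, 80, 255, ThresholdTypes.Binary);
    Cv2.FindContours(frameThresh, out frameContours, out external, RetrievalModes.List, ContourApproximationModes.ApproxSimple);
    prevFrame = currentFrame.Clone();
    return frameContours.Length > 20;
}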

Related

EmguCV Tracker implementation in C#

Looking for example code to understand how to implement the EmguCV tracker. I tried a few things, like in this poorly written code:
class ObjectTracker
{
Emgu.CV.Tracking.TrackerCSRT tracker = new Emgu.CV.Tracking.TrackerCSRT();
bool isActive = false;
public bool trackerActive = false;
public void Track(Bitmap pic, Rectangle selection,out Rectangle bound)
{
Rectangle result = new Rectangle();
Bitmap bitmap=pic; //This is your bitmap
Emgu.CV.Image<Bgr, Byte> imageCV = new Emgu.CV.Image<Bgr, byte>(bitmap); //Image Class from Emgu.CV
Emgu.CV.Mat mat = imageCV.Mat; //This is your Image converted to Mat
if (tracker.Init(mat,selection))
{
while (tracker.Update(mat, out bound))
{
result = bound;
}
}
bound = result;
}
}
I'm aware there are a few logic flaws, but I still couldn't manage to get any results in my different attempts.
Thanks!
Turns out, it is really simple.
First, define a Tracker of the type you desire.
Example:
Emgu.CV.Tracking.TrackerCSRT Tracker= new Emgu.CV.Tracking.TrackerCSRT();
Then, before scanning the selected area (RoI), you need to initialize the tracker.
Example:
Tracker.Init(image, roi);
Important note: image must be of type Emgu.CV.Mat and roi of type System.Drawing.Rectangle.
To convert your Bitmap to a Mat you can use the following method:
Example:
public Emgu.CV.Mat toMat(Bitmap pic)
{
Bitmap bitmap=pic;
Emgu.CV.Image<Bgr, Byte> imageCV = new Emgu.CV.Image<Bgr, byte>(bitmap);
Emgu.CV.Mat mat = imageCV.Mat;
return mat;
}
Note: This code belongs to someone else on Stack Overflow whose source I forget. Thanks to them.
Finally, the following statement returns the Rectangle that envelops the tracked object. The trick is that the method doesn't return the Rectangle as its return value; it returns it via an out parameter.
tracker.Update(image, out roi);
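Putting the pieces together, a minimal per-frame wrapper might look like this (a sketch: Init runs once on the first frame and Update on every frame after that; the class and method names are mine, and toMat is the helper above):
public class SimpleTracker
{
    private readonly Emgu.CV.Tracking.TrackerCSRT tracker = new Emgu.CV.Tracking.TrackerCSRT();
    private bool initialized = false;

    // Call once per frame; 'selection' is only used to initialize tracking.
    public bool TrackFrame(Bitmap frame, Rectangle selection, out Rectangle bound)
    {
        Emgu.CV.Mat mat = toMat(frame);
        if (!initialized)
        {
            initialized = tracker.Init(mat, selection);
            bound = selection;
            return initialized;
        }
        return tracker.Update(mat, out bound); // returns false once the object is lost
    }
}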
Last Note: If anyone knows how to improve performance or multithread this method, please leave a comment.
Have a nice day!

aruco.net - How to find marker orientation

I am trying to use openCV.NET to read scanned forms. The problem is that the positions of the relevant regions of interest and the alignment may sometimes differ depending on the printer the form was printed from and the way the user scanned it.
So I thought I could use an ArUco marker as a reference point as there are libraries (ArUco.NET) already built to recognize them. I was hoping to find out how much the ArUco code is rotated and then rotate the form backwards by that amount to make sure the text is straight. Then I can use the center of the ArUco code as a reference point to use OCR on specific regions on the form.
I am using the following code to get the OpenGL modelViewMatrix. However, it always seems to contain the same numbers no matter what angle the ArUco code is rotated to. I have only just started with all of these libraries, but I thought the modelViewMatrix would give different values depending on the rotation of the marker. Why would it always be the same?
Mat cameraMatrix = new Mat(3, 3, Depth.F32, 1);
Mat distortion = new Mat(1, 4, Depth.F32, 1);
using (Mat image2 = OpenCV.Net.CV.LoadImageM("./image.tif", LoadImageFlags.Grayscale))
{
using (var detector = new MarkerDetector())
{
detector.ThresholdMethod = ThresholdMethod.AdaptiveThreshold;
detector.Param1 = 7.0;
detector.Param2 = 7.0;
detector.MinSize = 0.01f;
detector.MaxSize = 0.5f;
detector.CornerRefinement = CornerRefinementMethod.Lines;
var markerSize = 10;
IList<Marker> detectedMarkers = detector.Detect(image2, cameraMatrix, distortion);
foreach (Marker marker in detectedMarkers)
{
Console.WriteLine("Detected a marker top left at: " + marker[0].X + " " + marker[0].Y);
//Upper 3x3 matrix of modelview matrix (0,4,8,1,5,9,2,6,10) is called rotation matrix.
double[] modelViewMatrix = marker.GetGLModelViewMatrix();
}
}
}
It looks like you have not initialized your camera parameters.
cameraMatrix and distortion are the intrinsic parameters of your camera. You can use OpenCV to find them.
This is for OpenCV 2.4 but will help you to understand the basics:
http://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html
Once you have found them and filled in cameraMatrix and distortion, the model view matrix should change with the marker's pose as expected.
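As a sketch of what that initialization might look like (all numeric values here are placeholders, imageWidth/imageHeight are assumed variables, and Mat.FromArray is assumed to be available in your OpenCV.Net version; real values must come from calibrating the camera or scanner):
// Approximate pinhole intrinsics: fx, fy are focal lengths in pixels and
// (cx, cy) is the principal point, assumed here to be the image center.
double fx = 800, fy = 800;
double cx = imageWidth / 2.0, cy = imageHeight / 2.0;
Mat cameraMatrix = Mat.FromArray(new double[,]
{
    { fx, 0,  cx },
    { 0,  fy, cy },
    { 0,  0,  1  }
});
// Four distortion coefficients, all zero: assume a distortion-free lens.
Mat distortion = Mat.FromArray(new double[,] { { 0, 0, 0, 0 } });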

openTK: run for loop on gpu?

I am currently developing a C# game engine in XNA and working on a routine for masking a 2D texture with another 2D texture.
So far I have this routine:
public static Texture2D MaskToTexture2DByTexture2D(GraphicsDevice device, Texture2D texture, Texture2D mask)
{
Texture2D output = new Texture2D(device, texture.Width, texture.Height);
int numberOfPixels = texture.Width * texture.Height;
Color[] ColorArray = new Color[numberOfPixels];
Color[] maskColorArray = new Color[numberOfPixels];
float[] maskArray = new float[numberOfPixels];
mask = ResizeEngine.ResizeToSize(device, mask, new Point(texture.Width, texture.Height));
mask.GetData<Color>(maskColorArray);
maskArray = ColorEngine.ConvertColorArrayToMaskValues(maskColorArray);
texture.GetData<Color>(ColorArray);
Parallel.For(0, ColorArray.Length, index =>
{
ColorArray[index] *= maskArray[index];
});
output.SetData<Color>(ColorArray);
return output;
}
ColorEngine is currently executing following method:
public static float[] ConvertColorArrayToMaskValues(Color[] colors)
{
float[] mask = new float[colors.Length];
Parallel.For(0, colors.Length, index => { mask[index] = ConvertColorToMaskValue(colors[index]); });
return mask;
}
public static float ConvertColorToMaskValue(Color color)
{
float mask = (color.R + color.G + color.B) / 765.0f;
return mask;
}
This works, but not at a rate I can use in the real-time render routine. I'd love to replace the Parallel.For loops with a loop executed on the GPU in parallel. I imported the OpenTK library, but I can't find any documentation for GPU code execution apart from the default graphics draw calls.
Is this possible? Or am I wasting my time here?
OpenTK is a CLI/CLR/.NET binding for OpenGL (and other media APIs). OpenGL is a drawing API aimed at GPUs. Programs running on GPUs that control drawing operations are called "shaders". OpenGL has functions to load and use shaders for its drawing operations. Thus, if you want to use OpenTK → OpenGL, you'll have to learn the OpenGL API and how to write OpenGL shader programs.
is this possible? Or am I wasting my time here?
Most certainly. This is a prime example of what you'd do in a so-called fragment shader.
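For a concrete picture, the per-pixel multiply inside the Parallel.For corresponds to a fragment shader along these lines (a sketch; the GLSL source is written as a C# string the way you would pass it to GL.ShaderSource in OpenTK, and the uniform/attribute names are mine):
// GLSL equivalent of ColorArray[index] *= maskArray[index]: sample both
// textures at the same coordinate and scale the color by the mask value.
const string MaskFragmentShader = @"
#version 330 core
uniform sampler2D uTexture; // the source texture
uniform sampler2D uMask;    // the mask texture
in vec2 vTexCoord;
out vec4 fragColor;
void main()
{
    vec4 color = texture(uTexture, vTexCoord);
    vec3 m = texture(uMask, vTexCoord).rgb;
    // (R+G+B)/765 over bytes equals the mean of the normalized channels:
    float mask = (m.r + m.g + m.b) / 3.0;
    fragColor = color * mask;
}";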

How do I access the rotation and translation vectors after camera calibration in emgu CV?

The goal of a camera calibration is to find the intrinsic and extrinsic parameters:
The intrinsic ones are those that describe the camera itself (focal
length, distortion, etc.) I get values for those, no problem.
The extrinsic parameters are basically the position of the camera. When I try to access those I get an AccessViolationException.
One way to perform such calibration is to
take an image of a calibration target with known corners
find those corners in the image
from the correspondence between 3D and 2D points, find the matrix that transforms one into the other
that matrix consists of the intrinsic and extrinsic parameters.
The call to the calibration function looks like this:
Mat[] rotationVectors = new Mat[1];
Mat[] translationVectors = new Mat[1];
double error = CvInvoke.CalibrateCamera(realCorners,
detectedCorners,
calibrationImages[0].Image.Size,
cameraMatrix,
distortionCoefficients,
0,
new MCvTermCriteria(30, 0.1),
out rotationVectors,
out translationVectors);
Console.WriteLine(rotationVectors[0].Size); // AccessViolationException
I only use one image here, but I have the same problem when using more images (30). Different calibration images would yield different results for the translation/rotation vectors anyway, which makes me doubt that using only one image is the problem.
The detection of points works, and drawing them into the original image gives reasonable results.
Both cameraMatrix and distortionCoefficients can be accessed and contain values. (I tried to only post the relevant parts of the code)
I use emgu version 3.0.0.2157
Why do I get an AccessViolationException on the rotationVectors and translationVectors?
I placed a breakpoint and found that the internal Data property of the returned Mat is null (visible in the VS debugger).
That explains why I cannot access it. But why is it null in the first place?
It is because of a bug in EmguCV. You are calling
public static double CalibrateCamera(
MCvPoint3D32f[][] objectPoints,
PointF[][] imagePoints,
Size imageSize,
IInputOutputArray cameraMatrix,
IInputOutputArray distortionCoeffs,
CvEnum.CalibType calibrationType,
MCvTermCriteria termCriteria,
out Mat[] rotationVectors,
out Mat[] translationVectors)
inside this method there is a call to
public static double CalibrateCamera(
IInputArray objectPoints,
IInputArray imagePoints,
Size imageSize,
IInputOutputArray cameraMatrix,
IInputOutputArray distortionCoeffs,
IOutputArray rotationVectors,
IOutputArray translationVectors,
CvEnum.CalibType flags,
MCvTermCriteria termCriteria)
The IOutputArray rotationVectors should be copied into the Mat[] rotationVectors, and the same goes for translationVectors. The problem is in this loop.
There is
for (int i = 0; i < imageCount; i++)
{
rotationVectors[i] = new Mat();
using (Mat matR = rotationVectors[i]) // <- bug
matR.CopyTo(rotationVectors[i]);
translationVectors[i] = new Mat();
using (Mat matT = translationVectors[i]) // <- bug
matT.CopyTo(translationVectors[i]);
}
whereas it should be
for (int i = 0; i < imageCount; i++)
{
rotationVectors[i] = new Mat();
using (Mat matR = rVecs[i])
matR.CopyTo(rotationVectors[i]);
translationVectors[i] = new Mat();
using (Mat matT = tVecs[i])
matT.CopyTo(translationVectors[i]);
}
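Until that fix lands in the library, a possible user-side workaround (a sketch, untested; it bypasses the Mat[] wrapper by calling the IOutputArray overload shown above directly, and assumes your corner arrays can be wrapped as IInputArray, e.g. via the VectorOf* classes in Emgu.CV.Util) is:
using (var rVecs = new Emgu.CV.Util.VectorOfMat())
using (var tVecs = new Emgu.CV.Util.VectorOfMat())
{
    // Same inputs as before; only the rotation/translation outputs differ.
    // CalibType.Default corresponds to the 0 passed in the question.
    double error = CvInvoke.CalibrateCamera(
        objectPointsInput,    // hypothetical IInputArray wrapping realCorners
        imagePointsInput,     // hypothetical IInputArray wrapping detectedCorners
        calibrationImages[0].Image.Size,
        cameraMatrix,
        distortionCoefficients,
        rVecs,
        tVecs,
        Emgu.CV.CvEnum.CalibType.Default,
        new MCvTermCriteria(30, 0.1));
    Console.WriteLine(rVecs[0].Size); // now populated, no AccessViolationException
}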
Finally, to get the rotation and translation values, you can copy the data using DataPointer:
var rotation = new Matrix<float>(rotationVectors[0].Rows, rotationVectors[0].Cols, rotationVectors[0].DataPointer);
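If you need the full 3x3 rotation matrix rather than the 1x3 Rodrigues vector, CvInvoke.Rodrigues converts between the two representations:
// Convert the Rodrigues rotation vector of the first view to a 3x3 matrix.
Mat rotationMatrix = new Mat();
CvInvoke.Rodrigues(rotationVectors[0], rotationMatrix);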

How to convert Bitmap to Mat structur in EmguCV & How to detect two images shift

Hello dear forum members!
I am working on a project to detect view changes from a security camera. I mean, when someone tries to move the camera (some kind of sabotage...), I have to notice it. My idea is:
capture images from the camera every 10 seconds and compare the two pictures (the old and the current picture).
There are almost 70 cameras I need to control, so I can't use live streaming because it would saturate my internet connection. I use the Emgu CV library for this task, but during my work I got stuck on a problem. Here is a piece of the code I prepared:
public class EmguCV
{
static public Model Test(string BaseImagePath, string ActualImagePath)
{
double noise = 0;
Mat curr64f = new Mat();
Mat prev64f = new Mat();
Mat hann = new Mat();
Mat src1 = CvInvoke.Imread(BaseImagePath, 0);
Mat src2 = CvInvoke.Imread(ActualImagePath, 0);
Size size = new Size(50, 50);
src1.ConvertTo(prev64f, Emgu.CV.CvEnum.DepthType.Cv64F);
src2.ConvertTo(curr64f, Emgu.CV.CvEnum.DepthType.Cv64F);
CvInvoke.CreateHanningWindow(hann, src1.Size, Emgu.CV.CvEnum.DepthType.Cv64F);
MCvPoint2D64f shift = CvInvoke.PhaseCorrelate(curr64f, prev64f, hann, out noise );
double value = noise ;
double radius = Math.Sqrt(shift.X * shift.X + shift.Y * shift.Y);
Model Test = new Model() { osX = shift.X, osY = shift.Y, noise = value };
return Test;
}
}
Therefore, I have two questions:
How to convert a Bitmap to the Mat structure?
At the moment I read the images to compare from disk by file path, but I would like to compare a collection of bitmaps without saving them to my hard drive.
Do you know any other way to detect a shift between two pictures? I would be really grateful for any other suggestions in this area.
Regards,
Mariusz
I know it's very late to answer this, but today I was looking into this problem on the internet and I found something like this:
Bitmap bitmap; //This is your bitmap
Image<Bgr, byte> imageCV = new Image<Bgr, byte>(bitmap); //Image Class from Emgu.CV
Mat mat = imageCV.Mat; //This is your Image converted to Mat
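Combining this with the question's code, Test can take Bitmaps directly instead of file paths (a sketch; it assumes the Image<Gray, byte>(Bitmap) constructor performs the grayscale conversion, replacing the Imread calls):
static public Model Test(Bitmap baseBitmap, Bitmap actualBitmap)
{
    double noise = 0;
    Mat curr64f = new Mat();
    Mat prev64f = new Mat();
    Mat hann = new Mat();
    // Convert the Bitmaps straight to grayscale Mats -- no disk round-trip.
    Mat src1 = new Image<Gray, byte>(baseBitmap).Mat;
    Mat src2 = new Image<Gray, byte>(actualBitmap).Mat;
    src1.ConvertTo(prev64f, Emgu.CV.CvEnum.DepthType.Cv64F);
    src2.ConvertTo(curr64f, Emgu.CV.CvEnum.DepthType.Cv64F);
    CvInvoke.CreateHanningWindow(hann, src1.Size, Emgu.CV.CvEnum.DepthType.Cv64F);
    MCvPoint2D64f shift = CvInvoke.PhaseCorrelate(curr64f, prev64f, hann, out noise);
    return new Model() { osX = shift.X, osY = shift.Y, noise = noise };
}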
