I am doing a project with Emgu CV in C#.
I am stuck on the first step. I calculated optical flow with OpticalFlow.HS and LK, but I don't know how to use velx and vely to draw the motion as points on the frame and show them in an ImageBox.
OpticalFlow.HS(prev, frame1, true, velx, vely, 0.1d, new MCvTermCriteria(100));
Can anyone describe what to do, or better yet provide a code example? I don't want to color-code the direction, only show the motion as points in the frame.
I found the solution, so here is my example in case anyone needs it.
Image<Gray, Byte> coloredMotion2 = new Image<Gray, Byte>(frame1.Size);
for (int i = 0; i < coloredMotion2.Width; i += 2)
{
    for (int j = 0; j < coloredMotion2.Height; j += 2)
    {
        // read the flow vector at this pixel; velx/vely are indexed (row, col)
        int dx = (int)CvInvoke.cvGetReal2D(velx, j, i);
        int dy = (int)CvInvoke.cvGetReal2D(vely, j, i);
        int pomi = i + dx;
        int pomj = j + dy;
        if (i != pomi && j != pomj)
        {
            // uncomment the line below if you want lines instead of points,
            // but it needs an RGB image, not a Gray one
            //CvInvoke.cvLine(coloredMotion, new Point(i, j), new Point(pomi, pomj), new MCvScalar(255, 0, 0), 1, Emgu.CV.CvEnum.LINE_TYPE.CV_AA, 0);
            CvInvoke.cvCircle(coloredMotion2, new Point(pomi, pomj), 1, new MCvScalar(255, 255, 255), 1, Emgu.CV.CvEnum.LINE_TYPE.CV_AA, 0);
        }
    }
}
motionImageBox.Image = coloredMotion2;
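If you want the lines from the commented-out option above instead of points, here is a minimal sketch of the same loop drawing on a colour image (it assumes the same frame1, velx, and vely as above; MCvScalar(0, 0, 255) is red in BGR order):

// A sketch of the line-drawing variant: convert the frame to Bgr so coloured lines are possible.
Image<Bgr, Byte> coloredMotion = frame1.Convert<Bgr, Byte>();
for (int i = 0; i < coloredMotion.Width; i += 2)
{
    for (int j = 0; j < coloredMotion.Height; j += 2)
    {
        int dx = (int)CvInvoke.cvGetReal2D(velx, j, i);
        int dy = (int)CvInvoke.cvGetReal2D(vely, j, i);
        if (dx != 0 || dy != 0)
        {
            // draw the flow vector as a red line from (i, j) to its displaced position
            CvInvoke.cvLine(coloredMotion, new Point(i, j), new Point(i + dx, j + dy),
                new MCvScalar(0, 0, 255), 1, Emgu.CV.CvEnum.LINE_TYPE.CV_AA, 0);
        }
    }
}
motionImageBox.Image = coloredMotion;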
I have a large set of infrared images of seeds; their sizes vary slightly.
I would like to find them (in the fastest possible way).
Below I show zoomed-in details of the images I process.
After a first noise removal and a blob filter, this is what I have:
The bright white is just direct reflection of the IR lamp; white pixels never combine (stretch out) over multiple seeds.
To make it clearer, I placed a letter on some individual seeds.
The problems I have:
A is a single seed; dirt on the seed generates a slight dark line.
B: the X close to it is, at its darkest intersection, still brighter than some other seeds (so I cannot lower the brightness or remove pixels whose gray value is below a certain threshold).
C: those are 3 seeds close to each other.
The smallest visible seeds above should not become even smaller.
I'm not using Matlab or OpenCV, as I work directly with locked image and memory data. I can access pixel data by array or by simple GetPixel/PutPixel commands. I wrote my own graphics library, which is fast enough for live camera data; processing speed is currently around 13 ms, and at around 25 ms I enter stream-processing lag.
I wonder how to separate those 'cloudy' blobs better.
I'm thinking of finding local maxima over a certain pixel range, but that should see A as one seed, while on B it should find that B and X are not connected.
So I'm not sure what such a local-peak function (or another function) should look like. Although I code in C#, I looked at C++ functions as well, like dilate, but that's not it. I also wrote a function to check the slope degree (as if it were a mountain-height image), but that couldn't divide areas B and C.
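For illustration, here is a minimal sketch of the kind of local-maximum scan I have in mind, written against the same raw image and GetPixelBleu/SetPixel helpers as my code further down (the search radius is a guess that needs tuning):

// A brute-force local-maximum scan: mark a pixel if nothing brighter exists
// within the search radius (radius roughly half the expected seed spacing).
int radius = 6;
for (int y = radius; y < raw.Height - radius; y++)
{
    for (int x = radius; x < raw.Width - radius; x++)
    {
        int center = raw.GetPixelBleu(x, y);
        if (center == 0) continue;
        bool isPeak = true;
        for (int b = -radius; b <= radius && isPeak; b++)
            for (int a = -radius; a <= radius && isPeak; a++)
                if (raw.GetPixelBleu(x + a, y + b) > center)
                    isPeak = false;
        if (isPeak) raw.SetPixel(x, y, Color.GreenYellow); // mark candidate seed centre
    }
}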
OK, I made a different slope-detection approach: now I don't look for a certain degree, but just for the tilting point over a small range. It works nicely on the X axis, but essentially I think it should work on both X and Y.
Here's the new result:
It can resolve issues A and B!
However, it doesn't differentiate between seeds that are aligned in a vertical row, and it causes small white noise (non-connected lines) at places where there is nearly nothing to detect. I'm also not yet sure how to do the same (combined) over the Y axis to get the tops, and then erase everything beyond a certain distance from the top (to separate the seeds).
Here is the code, showing just the pixel operations:
for (int y = raw.Height - 1; y > 5; y--)
{
    bool slopeUP = false;
    int[] peek = new int[raw.Width];
    for (int x = raw.Width - 1; x > 12; x--)
    {
        // sample two brightness pairs 11 pixels apart to find the tilting point
        int a = raw.GetPixelBleu(x, y);
        int b = raw.GetPixelBleu(x - 1, y);
        int d = raw.GetPixelBleu(x - 11, y);
        int f = raw.GetPixelBleu(x - 12, y);
        if ((f + d) > (a + b)) slopeUP = true;
        if ((f + d) < (a + b))
        {
            if (slopeUP)
            {
                // the slope just flipped from rising to falling:
                // mark a peak halfway between the two sample pairs
                slopeUP = false;
                peek[x - 6] = 10;
                //raw.SetPixel(x, y, Color.GreenYellow);
            }
            else peek[x - 6] = 0;
        }
    }
    // draw the peaks found in this row
    for (int x = raw.Width - 1; x > 7; x--)
    {
        if (peek[x - 1] > 5) raw.SetPixel(x, y, Color.Lavender);
    }
}
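Here is a minimal sketch of the same tilting-point pass transposed to the Y axis (it assumes the same raw image and GetPixelBleu/SetPixel helpers as above); scanning columns instead of rows should mark the vertical tops:

// The same tilting-point scan over the Y axis: scan each column and mark
// where the vertical slope flips from rising to falling.
for (int x = raw.Width - 1; x > 5; x--)
{
    bool slopeUp = false;
    int[] peek = new int[raw.Height];
    for (int y = raw.Height - 1; y > 12; y--)
    {
        int a = raw.GetPixelBleu(x, y);
        int b = raw.GetPixelBleu(x, y - 1);
        int d = raw.GetPixelBleu(x, y - 11);
        int f = raw.GetPixelBleu(x, y - 12);
        if ((f + d) > (a + b)) slopeUp = true;
        if ((f + d) < (a + b))
        {
            if (slopeUp)
            {
                slopeUp = false;
                peek[y - 6] = 10; // vertical tilting point
            }
            else peek[y - 6] = 0;
        }
    }
    for (int y = raw.Height - 1; y > 7; y--)
    {
        if (peek[y - 1] > 5) raw.SetPixel(x, y, Color.Lavender);
    }
}

Keeping only the points marked by both the X pass and the Y pass should then separate seeds that are stacked vertically as well.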
In this SO answer to a similar question, I applied persistent homology to find peaks in an image. I took your image, scaled it down to 50%, applied a Gaussian blur with radius 20 (in Gimp), and applied the methods described in the other article (click to enlarge):
I only show peaks with a persistence (see the other SO answer) of at least 20. The persistence diagram is shown here:
The 20th peak would be the little peak in the top-left corner, with a persistence of about 9. By applying a stronger Gaussian filter, the image becomes more diffuse and the peaks become more prominent.
Python code can be found here.
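The linked Python code is the reference implementation; for a rough idea of the method in C#, here is a minimal sketch of persistence-based peak finding on a grayscale array (the byte[,] input and all names are assumptions; the union-find merging and the 4-neighbourhood are the essentials):

using System;
using System.Collections.Generic;

// A rough sketch of persistence-based peak detection. Pixels are visited from
// brightest to darkest; each new bright island is "born" at a peak pixel, and
// when two islands merge, the one with the lower birth level "dies".
// Persistence = birth level - death level.
static List<(int x, int y, int persistence)> FindPeaks(byte[,] img, int minPersistence)
{
    int h = img.GetLength(0), w = img.GetLength(1);
    var order = new List<(int x, int y)>();
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
            order.Add((x, y));
    order.Sort((p, q) => img[q.y, q.x].CompareTo(img[p.y, p.x])); // brightest first

    int[] parent = new int[w * h]; // union-find forest; -1 = not visited yet
    for (int i = 0; i < parent.Length; i++) parent[i] = -1;
    var birth = new Dictionary<int, int>(); // component root -> birth level
    var peaks = new List<(int x, int y, int persistence)>();

    int Find(int i) { while (parent[i] != i) i = parent[i] = parent[parent[i]]; return i; }

    foreach (var (x, y) in order)
    {
        int idx = y * w + x, level = img[y, x];
        parent[idx] = idx; // a new component is born at this pixel
        birth[idx] = level;
        foreach (var (nx, ny) in new[] { (x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1) })
        {
            if (nx < 0 || ny < 0 || nx >= w || ny >= h) continue;
            int nIdx = ny * w + nx;
            if (parent[nIdx] == -1) continue; // darker neighbour, not visited yet
            int rootA = Find(idx), rootB = Find(nIdx);
            if (rootA == rootB) continue;
            // merge: the component with the lower birth level dies at this level
            int keep = birth[rootA] >= birth[rootB] ? rootA : rootB;
            int die = keep == rootA ? rootB : rootA;
            if (birth[die] - level >= minPersistence)
                peaks.Add((die % w, die / w, birth[die] - level));
            parent[die] = keep;
        }
    }
    // note: the component of the global maximum never dies; add order[0]
    // manually if you want the highest peak in the list as well
    return peaks;
}

Filtering with minPersistence around 20 should reproduce the kind of peak list shown above.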
So, as far as speed goes, I am only going off of the image you posted here, on which everything runs blazingly fast because it is tiny. Note that I padded the image after binarizing and never un-padded it, so you will want to either un-pad or shift your results accordingly. You may not even want to pad, but it allows detection of cut-off seeds.
Overview of the pipeline: removeSaturation >> Gaussian blur >> binarize >> pad >> distanceTransform >> peaks >> clustering
That being said, here are my code and results:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

void drawText(Mat & image);
void onMouse(int event, int x, int y, int, void*);
Mat bruteForceLocalMax(Mat srcImage, int searchRad);
void zoomPixelImage(Mat sourceImage, int multFactor, string name, bool normalized);
Mat mergeLocalPeaks(Mat srcImage, int mergeRadius);
Mat image;
bool debugDisplays = false;
int main()
{
cout << "Built with OpenCV " << CV_VERSION << endl;
TimeStamp precisionClock = TimeStamp(); // TimeStamp is my own timing helper class
image = imread("../Raw_Images/Seeds1.png",0);
if (image.empty()) { cout << "failed to load image"<<endl; }
else
{
zoomPixelImage(image, 5, "raw data", false);
precisionClock.labeledlapStamp("image read", true);
//find max value in image that is not oversaturated
int maxVal = 0;
for (int x = 0; x < image.rows; x++)
{
for (int y = 0; y < image.cols; y++)
{
int val = image.at<uchar>(x, y);
if (val >maxVal && val !=255)
{
maxVal = val;
}
}
}
//get rid of oversaturation regions (as they throw off processing)
image.setTo(maxVal, image == 255);
if (debugDisplays)
{zoomPixelImage(image, 5, "unsaturated data", false);}
precisionClock.labeledlapStamp("Unsaturate Data", true);
Mat gaussianBlurred = Mat();
GaussianBlur(image, gaussianBlurred, Size(9, 9), 10, 0);
if (debugDisplays)
{zoomPixelImage(gaussianBlurred, 5, "blurred data", false);}
precisionClock.labeledlapStamp("Gaussian", true);
Mat binarized = Mat();
threshold(gaussianBlurred, binarized, 50, 255, THRESH_BINARY);
if (debugDisplays)
{zoomPixelImage(binarized, 5, "binarized data", false);}
precisionClock.labeledlapStamp("binarized", true);
//pad edges (may or may not be neccesary depending on setup)
Mat paddedImage = Mat();
copyMakeBorder(binarized, paddedImage, 1, 1, 1, 1, BORDER_CONSTANT, 0);
if (debugDisplays)
{zoomPixelImage(paddedImage, 5, "padded data", false);}
precisionClock.labeledlapStamp("add padding", true);
Mat distTrans = Mat();
distanceTransform(paddedImage, distTrans, CV_DIST_L1,3,CV_8U);
if (debugDisplays)
{zoomPixelImage(distTrans, 5, "distanceTransform", true);}
precisionClock.labeledlapStamp("distTransform", true);
Mat peaks = Mat();
peaks = bruteForceLocalMax(distTrans,10);
if (debugDisplays)
{zoomPixelImage(peaks, 5, "peaks", false);}
precisionClock.labeledlapStamp("peaks", true);
//attempt to cluster any colocated peaks and find the best clustering count
Mat mergedPeaks = Mat();
mergedPeaks = mergeLocalPeaks(peaks, 5);
if (debugDisplays)
{zoomPixelImage(mergedPeaks, 5, "peaks final", false);}
precisionClock.labeledlapStamp("final peaks", true);
precisionClock.fullStamp(false);
waitKey(0);
}
}
void drawText(Mat & image)
{
putText(image, "Hello OpenCV",
Point(20, 50),
FONT_HERSHEY_COMPLEX, 1, // font face and scale
Scalar(255, 255, 255), // white
1, LINE_AA); // line thickness and type
}
void onMouse(int event, int x, int y, int, void*)
{
if (event != CV_EVENT_LBUTTONDOWN)
return;
Point pt = Point(x, y);
std::cout << "x=" << pt.x << "\t y=" << pt.y << "\t value=" << int(image.at<uchar>(y,x)) << "\n";
}
void zoomPixelImage(Mat sourceImage, int multFactor, string name, bool normalized)
{
Mat zoomed;// = Mat::zeros(sourceImage.rows*multFactor, sourceImage.cols*multFactor, CV_8U);
resize(sourceImage, zoomed, Size(sourceImage.cols*multFactor, sourceImage.rows*multFactor), sourceImage.cols*multFactor, sourceImage.rows*multFactor, INTER_NEAREST);
if (normalized) { normalize(zoomed, zoomed, 0, 255, NORM_MINMAX); }
namedWindow(name);
imshow(name, zoomed);
}
Mat bruteForceLocalMax(Mat srcImage, int searchRad)
{
Mat outputArray = Mat::zeros(srcImage.rows, srcImage.cols, CV_8U);
//global search top
for (int x = 0; x < srcImage.rows - 1; x++)
{
for (int y = 0; y < srcImage.cols - 1; y++)
{
bool peak = true;
float centerVal = srcImage.at<uchar>(x, y);
if (centerVal == 0) { continue; }
//local search top
for (int a = -searchRad; a <= searchRad; a++)
{
for (int b = -searchRad; b <= searchRad; b++)
{
if (x + a<0 || x + a>srcImage.rows - 1 || y + b < 0 || y + b>srcImage.cols - 1) { continue; }
if (srcImage.at<uchar>(x + a, y + b) > centerVal)
{
peak = false;
}
if (peak == false) { break; }
}
if (peak == false) { break; }
}
if (peak)
{
outputArray.at<uchar>(x, y) = 255;
}
}
}
return outputArray;
}
Mat mergeLocalPeaks(Mat srcImage, int mergeRadius)
{
Mat outputArray = Mat::zeros(srcImage.rows, srcImage.cols, CV_8U);
//global search top
for (int x = 0; x < srcImage.rows - 1; x++)
{
for (int y = 0; y < srcImage.cols - 1; y++)
{
float centerVal = srcImage.at<uchar>(x, y);
if (centerVal == 0) { continue; }
int aveX = x;
int aveY = y;
int xCenter = -1;
int yCenter = -1;
while (aveX != xCenter || aveY != yCenter)
{
xCenter = aveX;
yCenter = aveY;
aveX = 0;
aveY = 0;
int peakCount = 0;
//local search top
for (int a = -mergeRadius; a <= mergeRadius; a++)
{
for (int b = -mergeRadius; b <= mergeRadius; b++)
{
if (xCenter + a<0 || xCenter + a>srcImage.rows - 1 || yCenter + b < 0 || yCenter + b>srcImage.cols - 1) { continue; }
if (srcImage.at<uchar>(xCenter + a, yCenter + b) > 0)
{
aveX += (xCenter + a);
aveY += (yCenter + b);
peakCount += 1;
}
}
}
double dCentX = ((double)aveX / (double)peakCount);
double dCentY = ((double)aveY / (double)peakCount);
aveX = floor(dCentX);
aveY = floor(dCentY);
}
outputArray.at<uchar>(xCenter, yCenter) = 255;
}
}
return outputArray;
}
Speed measurements, debug images, and results: (screenshots omitted)
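Since you mentioned you work without OpenCV, note that the L1 distance transform used above is easy to hand-roll as a classic two-pass scan. Here is a minimal C# sketch; the byte[,] mask format and the names are assumptions:

using System;

// A two-pass L1 (city-block) distance transform over a binary mask:
// dist[y, x] = distance in pixels to the nearest background (0) pixel.
static int[,] DistanceTransformL1(byte[,] mask)
{
    int h = mask.GetLength(0), w = mask.GetLength(1);
    int inf = w + h; // larger than any possible distance
    int[,] dist = new int[h, w];

    // forward pass: propagate distances from the top-left
    for (int y = 0; y < h; y++)
        for (int x = 0; x < w; x++)
        {
            if (mask[y, x] == 0) { dist[y, x] = 0; continue; }
            int d = inf;
            if (y > 0) d = Math.Min(d, dist[y - 1, x] + 1);
            if (x > 0) d = Math.Min(d, dist[y, x - 1] + 1);
            dist[y, x] = d;
        }

    // backward pass: propagate distances from the bottom-right
    for (int y = h - 1; y >= 0; y--)
        for (int x = w - 1; x >= 0; x--)
        {
            int d = dist[y, x];
            if (y < h - 1) d = Math.Min(d, dist[y + 1, x] + 1);
            if (x < w - 1) d = Math.Min(d, dist[y, x + 1] + 1);
            dist[y, x] = d;
        }
    return dist;
}

The peaks of this distance map are the same seed-centre candidates that bruteForceLocalMax finds above.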
Hope this helps! Cheers!
I want to detect a display on an image (more precisely its corners).
I segment the image into display color and non-display color:
Image<Gray, byte> segmentedImage = greyImage.InRange(new Gray(180), new Gray(255));
Then I use corner Harris to find the corners:
Emgu.CV.Image<Emgu.CV.Structure.Gray, Byte> harrisImage = new Image<Emgu.CV.Structure.Gray, Byte>(greyImage.Size);
CvInvoke.CornerHarris(segmentedImage, harrisImage, 2);
CvInvoke.Normalize(harrisImage, harrisImage, 0, 255, NormType.MinMax, DepthType.Cv32F);
There are now white pixels in the corners, but I cannot access them:
for (int j = 0; j < harrisImage.Rows; j++)
{
for (int i = 0; i < harrisImage.Cols; i++)
{
Console.WriteLine(harrisImage[j, i].Intensity);
}
}
It writes only 0s. How can I access them? And if I can access them, how can I find the 4 corners of the screen in the harris image? Is there a function to find a perspectively transformed rectangle from points?
EDIT:
On the OpenCV IRC they said FindContours is not that precise. And when I try to run it on the segmentedImage, I get this:
(ran FindContours on the segmentedImage, then ApproxPolyDP and drew the found contour on the original greyscale image)
I cannot get it to find the contours more precisely...
EDIT2:
I cannot get this to work for me. Even with your code, I get the exact same result...
Here is my full Emgu code:
Emgu.CV.Image<Emgu.CV.Structure.Gray, Byte> imageFrameGrey = new Image<Emgu.CV.Structure.Gray, Byte>(bitmap);
Image<Gray, byte> segmentedImage = imageFrameGrey.InRange(new Gray(180), new Gray(255));
// get rid of small objects
int morph_size = 2;
Mat element = CvInvoke.GetStructuringElement(Emgu.CV.CvEnum.ElementShape.Rectangle, new System.Drawing.Size(2 * morph_size + 1, 2 * morph_size + 1), new System.Drawing.Point(morph_size, morph_size));
CvInvoke.MorphologyEx(segmentedImage, segmentedImage, Emgu.CV.CvEnum.MorphOp.Open, element, new System.Drawing.Point(-1, -1), 1, Emgu.CV.CvEnum.BorderType.Default, new MCvScalar());
// Find edges that form rectangles
List<RotatedRect> boxList = new List<RotatedRect>();
using (VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint())
{
CvInvoke.FindContours(segmentedImage, contours, null, Emgu.CV.CvEnum.RetrType.External, ChainApproxMethod.ChainApproxSimple);
int count = contours.Size;
for (int i = 0; i < count; i++)
{
using (VectorOfPoint contour = contours[i])
using (VectorOfPoint approxContour = new VectorOfPoint())
{
CvInvoke.ApproxPolyDP(contour, approxContour, CvInvoke.ArcLength(contour, true) * 0.01, true);
if (CvInvoke.ContourArea(approxContour, false) > 10000)
{
if (approxContour.Size == 4)
{
bool isRectangle = true;
System.Drawing.Point[] pts = approxContour.ToArray();
LineSegment2D[] edges = Emgu.CV.PointCollection.PolyLine(pts, true);
for (int j = 0; j < edges.Length; j++)
{
double angle = Math.Abs(edges[(j + 1) % edges.Length].GetExteriorAngleDegree(edges[j]));
if (angle < 80 || angle > 100)
{
isRectangle = false;
break;
}
}
if (isRectangle)
boxList.Add(CvInvoke.MinAreaRect(approxContour));
}
}
}
}
}
So, as promised, I tried it myself, in C++, but you should be able to adapt it easily to Emgu.
First I get rid of small objects in your segmented image with an opening:
int morph_elem = CV_SHAPE_RECT;
int morph_size = 2;
Mat element = getStructuringElement(morph_elem, Size(2 * morph_size + 1, 2 * morph_size + 1), Point(morph_size, morph_size));
// Apply the opening
morphologyEx(segmentedImage, segmentedImage_open, CV_MOP_OPEN, element);
Then detect all the contours and take the large ones and check for rectangular shape:
vector< vector<Point>> contours;
findContours(segmentedImage_open, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
for (vector<Point>& var : contours)
{
double area = contourArea(var);
if (area> 30000)
{
vector<Point> approx;
approxPolyDP(var, approx, 0.01*arcLength(var, true), true);
if (4 == approx.size()) //rectangular shape
{
// do something
}
}
}
Here is the result with the contour in red and the approximated curve in green:
Edit:
You can improve your code by increasing the approximation factor until you get a contour with 4 points or the factor passes a threshold. Just wrap a loop around ApproxPolyDP. You can define a range for the approximation value and prevent your code from failing if your object differs too much from a rectangle.
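A minimal sketch of that loop in Emgu terms, replacing the single ApproxPolyDP call inside your contour loop (the factor range and step are assumptions to tune):

// Increase the approximation factor until the contour collapses to 4 points
// or the factor passes a threshold.
using (VectorOfPoint approxContour = new VectorOfPoint())
{
    double perimeter = CvInvoke.ArcLength(contour, true);
    for (double factor = 0.01; factor <= 0.05; factor += 0.005)
    {
        CvInvoke.ApproxPolyDP(contour, approxContour, perimeter * factor, true);
        if (approxContour.Size == 4)
            break; // rectangular enough; stop refining
    }
    if (approxContour.Size == 4)
    {
        // proceed with the angle check / MinAreaRect as before
    }
}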
I am trying to perform a basic number-only OCR on an image by comparing it to bitmaps of the numbers 0 - 9, using the code below. I have tried to follow the code in the answer to this question, but it is not returning the correct results. There are 2 main issues that I am facing:
1: If the program determines that the number 0 is present at any given point, then it also determines that 1, 2, 3, ... , and 9 are present at that location, which is obviously not true.
2: Most of the locations where it claims to find numbers are blank (white) spaces.
I'll be the first to admit that the LockBits method is new to me, as I usually compare with GetPixel(), but that was far too slow for this project, so I may be making a rookie mistake or two.
Thanks for the help!!!
P.S. The image to OCR is RTA, and I believe everything else is self-explanatory.
void newOCR()
{
Rectangle rect = new Rectangle(0, 0, 8, 9);
Rectangle numRect = new Rectangle(0, 0, 8, 9);
for (int i = 0; i < RTA.Width - 8; i++)
{
for (int j = 0; j < RTA.Height - 9; j++)
{
rect.Location = new Point(i, j);
for (int n = 0; n < numbers.Length; n++)
{
System.Drawing.Imaging.BitmapData data = RTA.LockBits(rect, System.Drawing.Imaging.ImageLockMode.ReadOnly, RTA.PixelFormat);
System.Drawing.Imaging.BitmapData numData = numbers[n].LockBits(numRect, System.Drawing.Imaging.ImageLockMode.ReadOnly, numbers[n].PixelFormat);
unsafe
{
byte* ptr = (byte*)data.Scan0.ToPointer();
byte* numPtr = (byte*)data.Scan0.ToPointer();
int width = rect.Width * Image.GetPixelFormatSize(data.PixelFormat) / 8;
for(int y = 0; y < rect.Height; y++)
{
bool outBreak = false;
for(int x = 0; x < width; x++)
{
if(*ptr != *numPtr)
{
outBreak = true;
break;
}
else if(y == rect.Height - 1 && x == width - 1)
{
timeDict.Add(new Point(i, j), n);
}
ptr++;
numPtr++;
}
if(outBreak)
{
break;
}
ptr += data.Stride - width;
numPtr += numData.Stride - width;
}
RTA.UnlockBits(data);
numbers[n].UnlockBits(numData);
}
}
}
}
}
There is a (probable copy/paste) mistake in the following line,
byte* numPtr = (byte*)data.Scan0.ToPointer();
which causes the bitmap to be compared to itself. It should be
byte* numPtr = (byte*)numData.Scan0.ToPointer();
private void RemoveUnfitChromosomes()
{
    int[] buildingLW = new int[2 * buildingNo]; // 2x for length & width
    for (int i = 0; i <= 2 * (buildingNo - 1); ) // working values, to be changed for user input
    {
        buildingLW[i] = 30;     // l,x
        buildingLW[i + 1] = 10; // w,y
        i = i + 2;
    }
    foreach (int[] chromosome in population)
    {
        int[] temp = new int[3 * buildingNo];
        List<Rectangle> building = new List<Rectangle>();
        Array.Copy(chromosome, temp, 3 * buildingNo);
        for (int j = 0; j < buildingNo; j++)
        {
            building.Add(new Rectangle(temp[3 * j] - buildingLW[2 * j] / 2, temp[3 * j + 1] + buildingLW[2 * j + 1] / 2, buildingLW[2 * j + 1], buildingLW[2 * j]));
            RotateTransform rotate = new RotateTransform();
            rotate.Angle = temp[3 * j + 2];
            rotate.CenterX = temp[3 * j];
            rotate.CenterY = temp[3 * j + 1];
            building[j].RenderTransform = rotate; // this line does not compile, see below
        }
My problem is with building[j].RenderTransform = rotate;: the compiler complains that there is no RenderTransform member (or extension method accepting a first argument) of type System.Drawing.Rectangle. I understand that the Rectangle structure does not have a method for rotation; I am just confused by the information I am finding.
The above is my code, where I am trying to import data from another part of my program and then rotate the rectangles built from those data. FYI, I am trying to see if any of those rectangles intersect with one another. I am confused by the sample code at http://msdn.microsoft.com/en-us/library/ms754009(v=vs.110).aspx?cs-save-lang=1&cs-lang=csharp#code-snippet-2: their code was able to use polyline2.RenderTransform, so is there a way I can do that too?
I have tried searching online for hours, but I'm new to programming and can't always understand what people are saying, so my apologies if the question is a repeat. Thank you for your time.
RenderTransform is not an extension method; it's a property, and Rectangle has it as well. However, that is System.Windows.Shapes.Rectangle, not System.Drawing.Rectangle.
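If you want to stay with System.Drawing, one option (a minimal sketch; the method name is illustrative) is to rotate the rectangle's corner points yourself with System.Drawing.Drawing2D.Matrix and then test the resulting polygons for intersection:

using System.Drawing;
using System.Drawing.Drawing2D;

// A sketch: rotate a System.Drawing.Rectangle's corners around its centre.
// The rotated corners form a polygon you can test for intersection.
static PointF[] RotatedCorners(Rectangle rect, float angleDegrees)
{
    PointF[] corners =
    {
        new PointF(rect.Left, rect.Top),
        new PointF(rect.Right, rect.Top),
        new PointF(rect.Right, rect.Bottom),
        new PointF(rect.Left, rect.Bottom)
    };
    using (Matrix m = new Matrix())
    {
        PointF center = new PointF(rect.Left + rect.Width / 2f, rect.Top + rect.Height / 2f);
        m.RotateAt(angleDegrees, center); // rotate around the rectangle's centre
        m.TransformPoints(corners);       // corners are transformed in place
    }
    return corners;
}

Two rotated rectangles can then be tested for overlap by adding each corner set to a GraphicsPath (AddPolygon) and intersecting the resulting Regions, or with a separating-axis test if you need it fast.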
I have a 2D matrix of an image in Emgu CV. How can I fill the rest of the matrix with zeros but keep a certain rectangular area with the original data?
Method 1
One way to achieve what you're after is to access the Data of the Matrix directly and set the values to 0. The following code will set the lower-right quadrant of My_Matrix_Image to 0, while all other values remain 20.
Matrix<double> My_Matrix_Image = new Matrix<double>(8,10);
My_Matrix_Image.SetValue(20); //set all values of Matrix to 20
for(int i = 4; i< My_Matrix_Image.Height; i++)
{
for (int j = 5; j < My_Matrix_Image.Width; j++)
{
My_Matrix_Image.Data[i,j] = 0;
}
}
To achieve what you wanted from your comment, you must still take the same approach. My_Matrix_Image would contain your image data with values from 0-255; I will give solutions for both greyscale and colour images. I will keep the first 30 columns of the image and set the rest to zero; the image is 100x100 pixels in size. Change j = 30 to alter the number of columns kept.
First, a colour image Matrix, where there are 3 channels (Red, Green, and Blue):
for(int i = 0; i< My_Matrix_Image.Height; i++)
{
for (int j = 30; j < My_Matrix_Image.Width; j++)
{
My_Matrix_Image.Data[i,j,0] = 0; //RED
My_Matrix_Image.Data[i,j,1] = 0; //GREEN
My_Matrix_Image.Data[i,j,2] = 0; //BLUE
}
}
Greyscale Image Matrices are easier as there will be 1 channel:
for(int i = 0; i< My_Matrix_Image.Height; i++)
{
for (int j = 30; j < My_Matrix_Image.Width; j++)
{
My_Matrix_Image.Data[i,j,0] = 0;
}
}
Now let's assume you want to keep the middle columns (here, columns 40 to 59). You will have to add an additional loop; remember, this is the only way with a Matrix. I have only provided the colour-image example, as you can see it starts getting a little messy, and Method 2 may be better:
for(int i = 0; i< My_Matrix_Image.Height; i++)
{
//LOOP 1 SET THE FRONT ROWS
for (int j = 0; j<40; j++)
{
My_Matrix_Image.Data[i,j,0] = 0; //RED
My_Matrix_Image.Data[i,j,1] = 0; //GREEN
My_Matrix_Image.Data[i,j,2] = 0; //BLUE
}
// LOOP 2 SET THE BACK ROWS
for (int j = 60; j < My_Matrix_Image.Width; j++)
{
My_Matrix_Image.Data[i,j,0] = 0; //RED
My_Matrix_Image.Data[i,j,1] = 0; //GREEN
My_Matrix_Image.Data[i,j,2] = 0; //BLUE
}
}
Method 2
Now let's assume you do want to keep a rectangle of data. Writing six loops is complex and inefficient, so here is what you could do.
//Make a Blank Copy of your Image this will be automatically full of zeros
Matrix<double> My_Image_Copy = My_Image_Matrix.CopyBlank();
//Now copy the data you want to keep from your original image into you blank copy
for(int i = 40; i< 60; i++)
{
for (int j = 40; j < 60; j++)
{
My_Image_Copy.Data[i,j,0] = My_Matrix_Image.Data[i,j,0]; //RED
My_Image_Copy.Data[i,j,1] = My_Matrix_Image.Data[i,j,1]; //GREEN
My_Image_Copy.Data[i,j,2] = My_Matrix_Image.Data[i,j,2]; //BLUE
}
}
The above code will copy the centre 20x20 pixels from an image; you can obviously change this to copy whole rows by using for (int i = 0; i < My_Matrix_Image.Height; i++).
Much better, I'm sure you will agree.
Alternative
Now, while you are using a Matrix to store your data, using an Image construct makes coding a little simpler. While this may not be relevant to you, it may be to others.
If you use Image (or an alternative) to store your image data, then this can be achieved by:
Image<Gray, Byte> My_Image = new Image<Gray, byte>(openfile.FileName);
Image<Gray, Byte> My_Image_Copy = My_Image.CopyBlank();
Rectangle Store_ROI = My_Image.ROI; //Only need one as both images are the same size
My_Image.ROI = new Rectangle(50, 50, 100, 100);
My_Image_Copy.ROI = new Rectangle(50, 50, 100, 100);
CvInvoke.cvCopy(My_Image.Ptr, My_Image_Copy.Ptr, IntPtr.Zero); //This copies only the Region Of Interest
//Reset the Regions Of Interest so you will now operate on the whole image
My_Image.ROI = Store_ROI;
My_Image_Copy.ROI = Store_ROI;
Now, this is the same as Method 2, but you don't need to write out loops where errors can occur.
Hope this answers your question,
Cheers,
Chris
// Rectangle's parameters
static int x = 3;
static int y = 3;
static int width = 2;
static int height = 2;
Rectangle specificArea = new Rectangle(x, y, width, height);
// Your image matrix; 20x20 just a sample
Matrix<int> imageMatrix = new Matrix<int>(20, 20);
public Matrix<int> cropMatrix()
{
// Crop a specific area from image matrix
Matrix<int> specificMatrix = imageMatrix.GetSubRect(specificArea);
// Matrix with full of zeros and same size with imageMatrix
Matrix<int> croppedMatrix = imageMatrix.CopyBlank();
for (int i = y; i < y + height; i++)      // rows
{
    for (int j = x; j < x + width; j++)   // columns
    {
        // Set croppedMatrix with the saved values; the Matrix indexer is [row, column]
        croppedMatrix[i, j] = specificMatrix[i - y, j - x];
    }
}
return croppedMatrix;
}