I am trying to remove distortion from a cropped version of my image. I know the x and y mapping matrices for the full-size image (1920×1080). My approach is to first create two new mappings containing the part of the full-size mappings that corresponds to the crop rectangle, and then apply them to the cropped image. However, the output is just a black image:
Matrix<float> matCam1Map1, matCam1Map2;
Matrix<float> matCam1Map1Crp, matCam1Map2Crp;
matCam1Map1 = new Matrix<float>(1080, 1920);
matCam1Map2 = new Matrix<float>(1080, 1920);
CvInvoke.cvInitUndistortMap(itsCam1.IntrinsicMatrix.Ptr, itsCam1.DistortionCoeffs.Ptr, matCam1Map1.Ptr, matCam1Map2.Ptr);
matCam1Map1Crp = new Matrix<float>(rectCam1Crop.Height, rectCam1Crop.Width);
matCam1Map2Crp = new Matrix<float>(rectCam1Crop.Height, rectCam1Crop.Width);
for (int i = 0; i < rectCam1Crop.Height; i++)
{
    for (int j = 0; j < rectCam1Crop.Width; j++)
    {
        matCam1Map1Crp.Data[i, j] = matCam1Map1.Data[rectCam1Crop.Y + i, rectCam1Crop.X + j];
        matCam1Map2Crp.Data[i, j] = matCam1Map2.Data[rectCam1Crop.Y + i, rectCam1Crop.X + j];
    }
}
Capture capCam1 = new Capture();
Image<Bgr, Byte> imgCam1New = capCam1.QueryFrame().Copy(rectCam1Crop);
Image<Bgr, Byte> imgNoDistortion = new Image<Bgr, byte>(imgCam1New.Size);
CvInvoke.cvRemap(imgCam1New.Ptr, imgNoDistortion.Ptr, matCam1Map1Crp.Ptr, matCam1Map2Crp.Ptr, 0, new MCvScalar(0));
When I apply the full-size mappings to the full-size image, it works perfectly, but then I have to crop the result. To make it faster, I am trying to crop first and then apply the mappings to the cropped image only.
Does anyone know what may be the issue with my code?
Thanks
Related: see this related question.
I want to obtain the same outcome using the AForge.NET framework. The output should match the following:
However, my output does not come out as expected:
Why is the output different in AForge.NET?
Source Code
public partial class Form1 : Form
{
    public Form1()
    {
        InitializeComponent();
        Bitmap image = (Bitmap)Bitmap.FromFile(@"StandardImage\lena.png");
        Bitmap conv = new Bitmap(image.Width, image.Height, image.PixelFormat);
        ComplexImage cImage = ComplexImage.FromBitmap(image);
        cImage.ForwardFourierTransform();
        ComplexImage cKernel = ComplexImage.FromBitmap(image);
        cImage.ForwardFourierTransform();
        ComplexImage convOut = ComplexImage.FromBitmap(conv);
        convOut.ForwardFourierTransform();
        for (int y = 0; y < cImage.Height; y++)
        {
            for (int x = 0; x < cImage.Width; x++)
            {
                convOut.Data[x, y] = cImage.Data[x, y] * cKernel.Data[x, y];
            }
        }
        convOut.BackwardFourierTransform();
        Bitmap bbbb = convOut.ToBitmap();
        pictureBox1.Image = bbbb;
    }
}
The main problem is here:
ComplexImage cKernel = ComplexImage.FromBitmap(image);
//cImage.ForwardFourierTransform(); // <-- this line should be the FFT of cKernel
cKernel.ForwardFourierTransform();
This solves the problem you mentioned in the resulting image, but if you want to get an image similar to the bottom-right one, you need to do some normalization to increase the intensity of the pixels.
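For the normalization, something along these lines should work (a rough sketch against AForge's ComplexImage; the choice of scaling to the maximum magnitude is my own, so tune it as needed):
// Run after convOut.BackwardFourierTransform(): rescale so the brightest
// pixel maps to full intensity before calling ToBitmap().
double max = 0;
for (int y = 0; y < convOut.Height; y++)
    for (int x = 0; x < convOut.Width; x++)
        max = Math.Max(max, convOut.Data[y, x].Magnitude);
if (max > 0)
{
    for (int y = 0; y < convOut.Height; y++)
        for (int x = 0; x < convOut.Width; x++)
        {
            convOut.Data[y, x].Re /= max;
            convOut.Data[y, x].Im /= max;
        }
}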
Update:
The bottom-right image is actually a Fourier image, so I think we should remove the backward FFT:
//convOut.BackwardFourierTransform();
It seems you are not using a Gaussian kernel with AForge. Anyway, the library has a filter for Gaussian convolution:
// GaussianBlur(sigma, size): the arguments are sigma and kernel size, not width/height
double sigma = 4.0;
int size = 11;
GaussianBlur filter = new GaussianBlur(sigma, size);
// apply the filter
filter.ApplyInPlace(image);
Try it and the output should be the same as the others.
I used the Accord Framework to implement the cross correlation between two images. My goal is to find by how much (how many pixels, and in which direction) the second image is shifted compared to the first one.
The basic formula I used is the following:
corr(a, b) = ifft(fft(a_and_zeros) * conj(fft(b_and_zeros)))
I'll put the whole code at the end of my message; everything happens on a Click event. My initial images were stored in 1024*768 bitmaps. So here are the steps I took:
I cropped the 2 images into the 4 zones that were interesting for me (ExpInit, ExpFinal, RefInit, RefFinal), that I want to correlate two by two (ExpInit with ExpFinal and RefInit with RefFinal). Those cropped images have dimensions of 1024*131.
I put those cropped images in the center of new bitmaps with 2^n dimensions (2048*512).
Applied a grayscale filter to get the 8bppIndexed PixelFormat.
Converted each image to the ComplexImage format and applied a forward FFT to all 4 images.
Complex-conjugated every element of the Fourier-transformed RefFinal and ExpFinal ComplexImage objects.
Executed an element-wise multiplication between the ComplexImage objects I want to cross-correlate (ExpInit with ExpFinal, RefInit with RefFinal).
Applied the backward FFT to the product of the element-wise multiplication. Tadaaa, my cross-correlation is done, and I have two Complex[,] objects with the dimensions of my images (2048*512 pixels).
Now I want to answer my initial question: by how much (how many pixels, and in which direction) is the ExpFinal (respectively RefFinal) image shifted compared to ExpInit (respectively RefInit)? Here I am left puzzled.
I have the intuition I should be drawing a 3D graph from my Complex[,] object, where x and y are the indices in the array and z the value at that index, and search for the max value. But how do I do that with complex numbers? Do I use only the Re part? Only the Im part? The amplitude? Or am I completely mistaken?
Bonus question: what is a good library for drawing such graphs?
Here is the whole code for the described cross-correlation:
private void crosscorrButton_Click(object sender, EventArgs e)
{
    // Cropping all 4 sections (RefInit, ExpInit, RefFinal, ExpFinal) and placing
    // them in the center of new Bitmaps with 2^n dimensions
    Rectangle rExp = new Rectangle(1, 157, 1024, 131);
    Bitmap ExpInitCrop = new Bitmap(rExp.Width, rExp.Height);
    Graphics g = Graphics.FromImage(ExpInitCrop);
    g.DrawImage(BMInit, -rExp.X, -rExp.Y);
    Bitmap ExpInitLarge = new Bitmap(2048, 512);
    using (Graphics largeGraphics = Graphics.FromImage(ExpInitLarge))
    {
        largeGraphics.DrawImage(ExpInitCrop, 513, 190);
    }
    Rectangle rRef = new Rectangle(1, 484, 1024, 131);
    Bitmap RefInitCrop = new Bitmap(rRef.Width, rRef.Height);
    Graphics h = Graphics.FromImage(RefInitCrop);
    h.DrawImage(BMInit, -rRef.X, -rRef.Y);
    Bitmap RefInitLarge = new Bitmap(2048, 512);
    using (Graphics largeGraphics = Graphics.FromImage(RefInitLarge))
    {
        largeGraphics.DrawImage(RefInitCrop, 513, 190);
    }
    Bitmap ExpFinalCrop = new Bitmap(rExp.Width, rExp.Height);
    Graphics i = Graphics.FromImage(ExpFinalCrop);
    i.DrawImage(BMFinal, -rExp.X, -rExp.Y);
    Bitmap ExpFinalLarge = new Bitmap(2048, 512);
    using (Graphics largeGraphics = Graphics.FromImage(ExpFinalLarge))
    {
        largeGraphics.DrawImage(ExpFinalCrop, 513, 190);
    }
    Bitmap RefFinalCrop = new Bitmap(rRef.Width, rRef.Height);
    Graphics j = Graphics.FromImage(RefFinalCrop);
    j.DrawImage(BMFinal, -rRef.X, -rRef.Y);
    Bitmap RefFinalLarge = new Bitmap(2048, 512);
    using (Graphics largeGraphics = Graphics.FromImage(RefFinalLarge))
    {
        largeGraphics.DrawImage(RefFinalCrop, 513, 190);
    }

    // Grayscaling the 4 sections to get the 8bppIndexed PixelFormat
    Accord.Imaging.Filters.Grayscale filterGS = new Accord.Imaging.Filters.Grayscale(0.2125, 0.7154, 0.0721);
    Bitmap RefFinalLargeGS = filterGS.Apply(RefFinalLarge);
    Bitmap ExpFinalLargeGS = filterGS.Apply(ExpFinalLarge);
    Bitmap RefInitLargeGS = filterGS.Apply(RefInitLarge);
    Bitmap ExpInitLargeGS = filterGS.Apply(ExpInitLarge);

    // FFT on the 4 sections
    Accord.Imaging.ComplexImage ExpInitComplex = Accord.Imaging.ComplexImage.FromBitmap(ExpInitLargeGS);
    ExpInitComplex.ForwardFourierTransform();
    Accord.Imaging.ComplexImage RefInitComplex = Accord.Imaging.ComplexImage.FromBitmap(RefInitLargeGS);
    RefInitComplex.ForwardFourierTransform();
    Accord.Imaging.ComplexImage ExpFinalComplex = Accord.Imaging.ComplexImage.FromBitmap(ExpFinalLargeGS);
    ExpFinalComplex.ForwardFourierTransform();
    Accord.Imaging.ComplexImage RefFinalComplex = Accord.Imaging.ComplexImage.FromBitmap(RefFinalLargeGS);
    RefFinalComplex.ForwardFourierTransform();

    // Conjugating the ExpFinal and RefFinal sections
    Complex[,] CompConjExpFinal = new Complex[ExpFinalComplex.Height, ExpFinalComplex.Width];
    Complex[,] CompConjRefFinal = new Complex[RefFinalComplex.Height, RefFinalComplex.Width];
    for (int l = 0; l < ExpFinalComplex.Height; l++)
    {
        for (int m = 0; m < ExpFinalComplex.Width; m++)
        {
            CompConjExpFinal[l, m] = System.Numerics.Complex.Conjugate(ExpFinalComplex.Data[l, m]);
            ExpFinalComplex.Data[l, m] = CompConjExpFinal[l, m];
        }
    }
    for (int l = 0; l < RefFinalComplex.Height; l++)
    {
        for (int m = 0; m < RefFinalComplex.Width; m++)
        {
            CompConjRefFinal[l, m] = System.Numerics.Complex.Conjugate(RefFinalComplex.Data[l, m]);
            RefFinalComplex.Data[l, m] = CompConjRefFinal[l, m];
        }
    }

    // Element-wise multiplication of the complex arrays, two by two
    Complex[,] ExpMultipliedMatrix = new Complex[ExpFinalComplex.Height, ExpFinalComplex.Width];
    Complex[,] RefMultipliedMatrix = new Complex[RefFinalComplex.Height, RefFinalComplex.Width];
    for (int l = 0; l < ExpFinalComplex.Height; l++)
    {
        for (int m = 0; m < ExpFinalComplex.Width; m++)
        {
            ExpMultipliedMatrix[l, m] = System.Numerics.Complex.Multiply(ExpInitComplex.Data[l, m], ExpFinalComplex.Data[l, m]);
            RefMultipliedMatrix[l, m] = System.Numerics.Complex.Multiply(RefInitComplex.Data[l, m], RefFinalComplex.Data[l, m]);
        }
    }

    // Inverse FFT
    Accord.Math.FourierTransform.FFT2(ExpMultipliedMatrix, FourierTransform.Direction.Backward);
    Accord.Math.FourierTransform.FFT2(RefMultipliedMatrix, FourierTransform.Direction.Backward);
    Complex[,] CrossCorrExpMatrix = ExpMultipliedMatrix;
    Complex[,] CrossCorrRefMatrix = RefMultipliedMatrix;
}
Thanks a lot!
The imaginary part of the result should be 0 (or within numerical error). To find the shift, you should look at the location of the peak of the correlation's amplitude (and unless one image is the negative of the other, that is also likely to correspond to the peak of the correlation's real part). The main thing to be careful about: since you centered both images, an extra shift (of half the image size) is introduced.
As for viewing the graph, you could fairly easily map the result to a grayscale image and view it with your favorite image viewer.
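To make the peak search concrete, here is a minimal sketch (my own code, not from the Accord docs) that scans the magnitude of one of the resulting Complex[,] arrays from the question and converts the peak index into a signed shift:
// Find the peak of the correlation amplitude.
int rows = CrossCorrExpMatrix.GetLength(0);
int cols = CrossCorrExpMatrix.GetLength(1);
int peakRow = 0, peakCol = 0;
double peakMag = double.MinValue;
for (int r = 0; r < rows; r++)
{
    for (int c = 0; c < cols; c++)
    {
        double mag = CrossCorrExpMatrix[r, c].Magnitude;
        if (mag > peakMag)
        {
            peakMag = mag;
            peakRow = r;
            peakCol = c;
        }
    }
}
// FFT-based correlation is circular: a peak at (0, 0) means no shift,
// and indices past the midpoint wrap around to negative shifts.
int shiftY = (peakRow > rows / 2) ? peakRow - rows : peakRow;
int shiftX = (peakCol > cols / 2) ? peakCol - cols : peakCol;
// Then correct for the extra offset mentioned above, since both crops
// were drawn centered in their padded bitmaps.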
I want to save an image as Format8bppIndexed using this code:
Bitmap imgsource = new Bitmap(sourceimage);
Bitmap imgtarget = new Bitmap(imgsource.Width, imgsource.Height, PixelFormat.Format8bppIndexed);
for (int I = 0; I < imgsource.Width; I++)
{
    for (int J = 0; J < imgsource.Height; J++)
    {
        imgtarget.SetPixel(I, J, imgsource.GetPixel(I, J));
    }
}
imgtarget.Save(targetimage);
but I get the error "SetPixel is not supported for images with indexed pixel formats".
I still want to save the image with an indexed pixel format. How can I do that?
Use this instead:
Bitmap imgtarget = imgsource.Clone(
    new Rectangle(0, 0, imgsource.Width, imgsource.Height),
    PixelFormat.Format8bppIndexed);
EDIT:
There are two kinds of Image in GDI+: Bitmaps and Metafiles. Usually you load the image from a bitmap file (.jpg, .png, .bmp, .gif, .exif, or .tiff) rather than a metafile (.wmf or .emf). So, instead of creating a new bitmap based on the image, just cast the Image object to Bitmap:
Bitmap imgsource = (Bitmap)sourceimage;
The first line of your code changes the original properties of the image and resets the DPI to 96.
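If you really do need a fresh Bitmap rather than a cast, here is a minimal sketch for carrying the original resolution over (SetResolution is standard GDI+; the rest is my own illustration):
// Create a copy that keeps the source pixel format and DPI
// instead of the default 96x96.
Bitmap copy = new Bitmap(imgsource.Width, imgsource.Height, imgsource.PixelFormat);
copy.SetResolution(imgsource.HorizontalResolution, imgsource.VerticalResolution);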
I have an image that looks like this:
and I want to find the edges of the dark part, like this (the red lines are what I am looking for):
I have tried a few approaches and none have worked, so I am hoping there is an Emgu guru out there willing to help me...
Approach 1
Convert the image to grayscale
Remove noise and invert
Remove anything that is not really bright
Get the canny and the polygons
Code for this (I know that I should be disposing of things properly but I am keeping the code short):
var orig = new Image<Bgr, byte>(inFile);
var contours = orig
    .Convert<Gray, byte>()
    .PyrDown()
    .PyrUp()
    .Not()
    .InRange(new Gray(190), new Gray(255))
    .Canny(new Gray(190), new Gray(255))
    .FindContours(CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
                  RETR_TYPE.CV_RETR_TREE);
var output = new Image<Gray, byte>(orig.Size);
for (; contours != null; contours = contours.HNext)
{
    var poly = contours.ApproxPoly(contours.Perimeter * 0.05,
                                   contours.Storage);
    output.Draw(poly, new Gray(255), 1);
}
output.Save(outFile);
This is the result:
Approach 2
Convert the image to grayscale
Remove noise and invert
Remove anything that is not really bright
Get the canny and then lines
Code for this:
var orig = new Image<Bgr, byte>(inFile);
var linesegs = orig
    .Convert<Gray, byte>()
    .PyrDown()
    .PyrUp()
    .Not()
    .InRange(new Gray(190), new Gray(255))
    .Canny(new Gray(190), new Gray(255))
    .HoughLinesBinary(
        1,
        Math.PI / 45.0,
        20,
        30,
        10
    )[0];
var output = new Image<Gray, byte>(orig.Size);
foreach (var l in linesegs)
{
    output.Draw(l, new Gray(255), 1);
}
output.Save(outFile);
This is the result:
Notes
I have tried adjusting all the parameters on those two approaches and adding smoothing but I can never get the simple edges that I need because, I suppose, the darker region is not a solid colour.
I have also tried dilating and eroding, but the parameters I have to use to reach a single colour are so high that I end up including some of the grey stuff on the right and losing accuracy.
Yes, it's possible, and here is how you could do it:
Change the contrast of the image to make the lighter part disappear:
Then, convert it to HSV to perform a threshold operation on the Saturation channel:
And execute erode & dilate operations to get rid of the noise:
At this point you'll have the result you were looking for. For testing purposes, at the end I execute the bounding box technique to show how to detect the beginning and the end of the area of interest:
I didn't have the time to tweak the parameters for a perfect detection, but I'm sure you can figure it out. This answer is meant as a roadmap for achieving that.
This is the C++ code I came up with; I trust you are capable of converting it to C#:
#include <cv.h>
#include <highgui.h>

int main(int argc, char* argv[])
{
    cv::Mat image = cv::imread(argv[1]);
    cv::Mat new_image = cv::Mat::zeros(image.size(), image.type());

    /* Change contrast: new_image(i,j) = alpha*image(i,j) + beta */
    double alpha = 1.8; // [1.0-3.0]
    int beta = 100;     // [0-100]
    for (int y = 0; y < image.rows; y++)
    {
        for (int x = 0; x < image.cols; x++)
        {
            for (int c = 0; c < 3; c++)
            {
                new_image.at<cv::Vec3b>(y,x)[c] =
                    cv::saturate_cast<uchar>(alpha * (image.at<cv::Vec3b>(y,x)[c]) + beta);
            }
        }
    }
    cv::imshow("contrast", new_image);

    /* Convert RGB Mat into HSV color space */
    cv::Mat hsv;
    cv::cvtColor(new_image, hsv, CV_BGR2HSV);
    std::vector<cv::Mat> v;
    cv::split(hsv, v);

    // Perform threshold on the S channel of HSV
    int thres = 15;
    cv::threshold(v[1], v[1], thres, 255, cv::THRESH_BINARY_INV);
    cv::imshow("saturation", v[1]);

    /* Erode & Dilate */
    int erosion_size = 6;
    cv::Mat element = cv::getStructuringElement(cv::MORPH_CROSS,
                          cv::Size(2 * erosion_size + 1, 2 * erosion_size + 1),
                          cv::Point(erosion_size, erosion_size));
    cv::erode(v[1], v[1], element);
    cv::dilate(v[1], v[1], element);
    cv::imshow("binary", v[1]);

    /* Bounding box */
    // Invert colors
    cv::bitwise_not(v[1], v[1]);
    // Store the set of points in the image before assembling the bounding box
    std::vector<cv::Point> points;
    cv::Mat_<uchar>::iterator it = v[1].begin<uchar>();
    cv::Mat_<uchar>::iterator end = v[1].end<uchar>();
    for (; it != end; ++it)
    {
        if (*it) points.push_back(it.pos());
    }
    // Compute minimal bounding box
    cv::RotatedRect box = cv::minAreaRect(cv::Mat(points));
    // Draw bounding box in the original image (debug purposes)
    cv::Point2f vertices[4];
    box.points(vertices);
    for (int i = 0; i < 4; ++i)
    {
        cv::line(image, vertices[i], vertices[(i + 1) % 4], cv::Scalar(0, 255, 0), 2, CV_AA);
    }
    cv::imshow("box", image);
    cvWaitKey(0);
    return 0;
}
How can I fill the holes in a binary image in Emgu CV?
In AForge.NET it's easy: use the FillHoles class.
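For reference, in AForge it is just something like this (property names from memory, so double-check them):
// AForge.Imaging.Filters.FillHoles works on binary (8bpp grayscale) images;
// the size limits control which holes are considered fillable.
FillHoles filter = new FillHoles();
filter.MaxHoleWidth = 20;   // fill holes up to 20 px wide
filter.MaxHoleHeight = 20;  // and up to 20 px tall
filter.ApplyInPlace(binaryImage);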
Though the question is a little bit old, I'd like to contribute an alternative solution to the problem.
You can obtain the same result as Chris's, without the memory problem, if you use the following:
private Image<Gray, byte> FillHoles(Image<Gray, byte> image)
{
    var resultImage = image.CopyBlank();
    Gray gray = new Gray(255);
    using (var mem = new MemStorage())
    {
        for (var contour = image.FindContours(
                 CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
                 RETR_TYPE.CV_RETR_CCOMP,
                 mem); contour != null; contour = contour.HNext)
        {
            resultImage.Draw(contour, gray, -1);
        }
    }
    return resultImage;
}
The good thing about the method above is that you can selectively fill holes that meet your criteria. For example, you may only want to fill holes whose pixel count (the number of black pixels inside the blob) is below 50, etc.:
private Image<Gray, byte> FillHoles(Image<Gray, byte> image, int minArea, int maxArea)
{
    var resultImage = image.CopyBlank();
    Gray gray = new Gray(255);
    using (var mem = new MemStorage())
    {
        for (var contour = image.FindContours(
                 CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
                 RETR_TYPE.CV_RETR_CCOMP,
                 mem); contour != null; contour = contour.HNext)
        {
            if ((contour.Area < maxArea) && (contour.Area > minArea))
                resultImage.Draw(contour, gray, -1);
        }
    }
    return resultImage;
}
Yes, there is a method, but it's a bit messy as it's based on the cvFloodFill operation. All this algorithm is designed to do is fill an area with a colour until it reaches an edge, similar to a region-growing algorithm. To use it effectively you need a little inventive coding, but I warn you: this code is only to get you started, and it may require refactoring to speed things up. As it stands, the loop goes through each of your pixels that is less than 255, applies cvFloodFill, checks the size of the filled area, and then, if the area is under a certain size, fills it in.
It is important to note that a copy of the original image is supplied to the cvFloodFill operation, since a pointer is used. If the image itself were supplied, you would end up with a white image.
OpenFileDialog OpenFile = new OpenFileDialog();
if (OpenFile.ShowDialog() == DialogResult.OK)
{
    Image<Bgr, byte> image = new Image<Bgr, byte>(OpenFile.FileName);
    for (int i = 0; i < image.Width; i++)
    {
        for (int j = 0; j < image.Height; j++)
        {
            if (image.Data[j, i, 0] != 255)
            {
                Image<Bgr, byte> image_copy = image.Copy();
                Image<Gray, byte> mask = new Image<Gray, byte>(image.Width + 2, image.Height + 2);
                MCvConnectedComp comp = new MCvConnectedComp();
                Point point1 = new Point(i, j);
                CvInvoke.cvFloodFill(image_copy.Ptr, point1, new MCvScalar(255, 255, 255, 255),
                    new MCvScalar(0, 0, 0),
                    new MCvScalar(0, 0, 0), out comp,
                    Emgu.CV.CvEnum.CONNECTIVITY.EIGHT_CONNECTED,
                    Emgu.CV.CvEnum.FLOODFILL_FLAG.DEFAULT, mask.Ptr);
                if (comp.area < 10000)
                {
                    image = image_copy.Copy();
                }
            }
        }
    }
}
The "new MCvScalar(0, 0, 0), new MCvScalar(0, 0, 0)," are not really important in this case as you are only filling in results of a binary image. YOu could play around with other settings to see what results you can achieve. "if (comp.area < 10000)" is the key constant to change is you want to change what size hole the method will fill.
These are the results that you can expect:
Original
Results
The problem with this method is that it's extremely memory intensive: it managed to eat up 6 GB of RAM on a 200x200 image, and when I tried 200x300 it ate all 8 GB of my RAM and brought everything to a crashing halt. Unless a majority of your image is white and you want to fill in tiny gaps, or you can minimise where you apply the method, I would avoid it. I would suggest writing your own class that examines each pixel that is not 255, records the position of each such pixel in a simple list while counting the connected region, and, if the count is below a threshold, sets these positions to 255 in your image (by iterating through the list).
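If you want to try that route, here is a rough sketch of the idea (untested, the naming is my own): it walks each dark region once with a queue instead of calling cvFloodFill per pixel, counts the region's pixels, and whitens the region when the count is below the threshold.
// Needs System.Collections.Generic and System.Drawing.
private static void FillSmallHoles(Image<Gray, byte> img, int maxHoleArea)
{
    int w = img.Width, h = img.Height;
    bool[,] visited = new bool[h, w];
    for (int y = 0; y < h; y++)
    {
        for (int x = 0; x < w; x++)
        {
            if (img.Data[y, x, 0] == 255 || visited[y, x])
                continue;
            // Collect this dark region with a simple queue-based fill.
            var region = new List<Point>();
            var queue = new Queue<Point>();
            visited[y, x] = true;
            queue.Enqueue(new Point(x, y));
            while (queue.Count > 0)
            {
                Point p = queue.Dequeue();
                region.Add(p);
                // 4-connected neighbours
                foreach (Point n in new[] {
                    new Point(p.X + 1, p.Y), new Point(p.X - 1, p.Y),
                    new Point(p.X, p.Y + 1), new Point(p.X, p.Y - 1) })
                {
                    if (n.X < 0 || n.Y < 0 || n.X >= w || n.Y >= h)
                        continue;
                    if (visited[n.Y, n.X] || img.Data[n.Y, n.X, 0] == 255)
                        continue;
                    visited[n.Y, n.X] = true;
                    queue.Enqueue(n);
                }
            }
            // Whiten small regions; larger ones are left alone.
            if (region.Count < maxHoleArea)
                foreach (Point p in region)
                    img.Data[p.Y, p.X, 0] = 255;
        }
    }
}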
I would stick with the AForge FillHoles class if you do not wish to write your own, as it is designed for this purpose.
Cheers
Chris
You can use FillConvexPoly:
image.FillConvexPoly(externalContours.ToArray(), new Gray(255));
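Here externalContours would come from a contour search. A hedged sketch of the full flow, assuming image is the Image<Gray, byte> being filled and using the same old Emgu 2.x contour API as the answers above (note that FillConvexPoly is only correct for convex shapes):
using (var mem = new MemStorage())
{
    for (var contour = image.FindContours(
             CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_SIMPLE,
             RETR_TYPE.CV_RETR_EXTERNAL,
             mem); contour != null; contour = contour.HNext)
    {
        image.FillConvexPoly(contour.ToArray(), new Gray(255));
    }
}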