EMGU/OpenCV FFT of image not yielding expected results - c#

I'm trying to visualize the FFT of an image with EMGU. Here's the image I'm processing:
Here's the expected result:
Here's what I get:
Here's my code:
Image<Gray, float> image = new Image<Gray, float>(@"C:\Users\me\Desktop\sample3.jpg");
IntPtr complexImage = CvInvoke.cvCreateImage(image.Size, Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_32F, 2);
CvInvoke.cvSetZero(complexImage);
CvInvoke.cvSetImageCOI(complexImage, 1);
CvInvoke.cvCopy(image, complexImage, IntPtr.Zero);
CvInvoke.cvSetImageCOI(complexImage, 0);
Matrix<float> dft = new Matrix<float>(image.Rows, image.Cols, 2);
CvInvoke.cvDFT(complexImage, dft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, 0);
Matrix<float> outReal = new Matrix<float>(image.Size);
Matrix<float> outIm = new Matrix<float>(image.Size);
CvInvoke.cvSplit(dft, outReal, outIm, IntPtr.Zero, IntPtr.Zero);
Image<Gray, float> fftImage = new Image<Gray, float>(outReal.Size);
CvInvoke.cvCopy(outReal, fftImage, IntPtr.Zero);
pictureBox1.Image = image.ToBitmap();
pictureBox2.Image = fftImage.Log().ToBitmap();
What mistake am I making here?
Update: as per Roger Rowland's suggestion here's my updated code. The result looks better but I'm not 100% sure it's correct. Here's the result:
Image<Gray, float> image = new Image<Gray, float>(@"C:\Users\yytov\Desktop\sample3.jpg");
IntPtr complexImage = CvInvoke.cvCreateImage(image.Size, Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_32F, 2);
CvInvoke.cvSetZero(complexImage); // Initialize all elements to Zero
CvInvoke.cvSetImageCOI(complexImage, 1);
CvInvoke.cvCopy(image, complexImage, IntPtr.Zero);
CvInvoke.cvSetImageCOI(complexImage, 0);
Matrix<float> dft = new Matrix<float>(image.Rows, image.Cols, 2);
CvInvoke.cvDFT(complexImage, dft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, 0);
//The Real part of the Fourier Transform
Matrix<float> outReal = new Matrix<float>(image.Size);
//The imaginary part of the Fourier Transform
Matrix<float> outIm = new Matrix<float>(image.Size);
CvInvoke.cvSplit(dft, outReal, outIm, IntPtr.Zero, IntPtr.Zero);
CvInvoke.cvPow(outReal, outReal, 2.0);
CvInvoke.cvPow(outIm, outIm, 2.0);
CvInvoke.cvAdd(outReal, outIm, outReal, IntPtr.Zero);
CvInvoke.cvPow(outReal, outReal, 0.5);
CvInvoke.cvAddS(outReal, new MCvScalar(1.0), outReal, IntPtr.Zero); // 1 + Mag
CvInvoke.cvLog(outReal, outReal); // log(1 + Mag)
// Swap quadrants
int cx = outReal.Cols / 2;
int cy = outReal.Rows / 2;
Matrix<float> q0 = outReal.GetSubRect(new Rectangle(0, 0, cx, cy));
Matrix<float> q1 = outReal.GetSubRect(new Rectangle(cx, 0, cx, cy));
Matrix<float> q2 = outReal.GetSubRect(new Rectangle(0, cy, cx, cy));
Matrix<float> q3 = outReal.GetSubRect(new Rectangle(cx, cy, cx, cy));
Matrix<float> tmp = new Matrix<float>(q0.Size);
q0.CopyTo(tmp);
q3.CopyTo(q0);
tmp.CopyTo(q3);
q1.CopyTo(tmp);
q2.CopyTo(q1);
tmp.CopyTo(q2);
CvInvoke.cvNormalize(outReal, outReal, 0.0, 255.0, Emgu.CV.CvEnum.NORM_TYPE.CV_MINMAX, IntPtr.Zero);
Image<Gray, float> fftImage = new Image<Gray, float>(outReal.Size);
CvInvoke.cvCopy(outReal, fftImage, IntPtr.Zero);
pictureBox1.Image = image.ToBitmap();
pictureBox2.Image = fftImage.ToBitmap();
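The magnitude pipeline above (square both channels, add, square-root, then log(1 + Mag)) is the standard log-magnitude spectrum. A minimal NumPy sketch of the same math, useful for cross-checking the C# result on a tiny made-up array (the 2×2 "image" here is purely illustrative, not sample3.jpg):

```python
import numpy as np

# Tiny made-up grayscale "image" standing in for the real input
image = np.array([[0.0, 255.0],
                  [255.0, 0.0]])

# Forward 2-D DFT; complex output plays the role of the 2-channel cvDFT result
dft = np.fft.fft2(image)

# sqrt(Re^2 + Im^2) -- same as the cvPow/cvAdd/cvPow sequence above
mag = np.sqrt(dft.real ** 2 + dft.imag ** 2)

# log(1 + Mag) -- same as the cvAddS + cvLog pair above
log_mag = np.log(1.0 + mag)
```

The DC term `mag[0, 0]` equals the sum of all pixels (510 here), which is why the log and the later min-max normalization are needed to make anything besides the center visible.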

I cannot comment on the magnitude/intensity of the resulting image, but I can give you a tip about the spatial distribution of points in your image.
OpenCV doesn't rearrange the quadrants to put the origin [0,0] at the center of the image; you have to rearrange the quadrants manually.
Look at step 6 on the following page:
http://docs.opencv.org/doc/tutorials/core/discrete_fourier_transform/discrete_fourier_transform.html
It's the official OpenCV documentation, so it's in C++, but the principle holds.
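The manual four-quadrant swap from that tutorial is exactly what NumPy calls `fftshift`; a quick sketch verifying the equivalence on even-sized arrays (the same shape assumption the C# sub-rect code makes):

```python
import numpy as np

def swap_quadrants(m):
    """Manual quadrant swap (q0<->q3, q1<->q2), mirroring the GetSubRect code."""
    cy, cx = m.shape[0] // 2, m.shape[1] // 2
    out = np.empty_like(m)
    out[:cy, :cx] = m[cy:, cx:]  # bottom-right -> top-left
    out[cy:, cx:] = m[:cy, :cx]  # top-left -> bottom-right
    out[:cy, cx:] = m[cy:, :cx]  # bottom-left -> top-right
    out[cy:, :cx] = m[:cy, cx:]  # top-right -> bottom-left
    return out

a = np.arange(16.0).reshape(4, 4)
# For even dimensions this matches np.fft.fftshift exactly
```

Note that for odd-sized images the halves are unequal, which is why the tutorial crops the image to even dimensions first.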

Related

Emgu Split Object

May I ask how I can separate the detected objects in a contour?
Below is my source code:
Image<Gray, byte> imgOutput = imgInput.Convert<Gray, byte>().ThresholdBinary(new Gray(100), new Gray(255));
Emgu.CV.Util.VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
Mat m = new Mat();
//Image<Gray, byte> imgOut = new Image<Gray, byte>(imgInput.Width, imgInput.Height, new Gray(0));
CvInvoke.FindContours(imgOutput, contours, m, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
if (contours.Size > 0)
{
// Contour indices start at 0; contours[1] would throw when only one contour is found
double perimeter = CvInvoke.ArcLength(contours[0], true);
VectorOfPoint approx = new VectorOfPoint();
CvInvoke.ApproxPolyDP(contours[0], approx, 0.04 * perimeter, true);
CvInvoke.DrawContours(imgInput, contours, 0, new MCvScalar(0, 0, 255), 2);
pictureBox2.Image = imgInput.Bitmap;
}
Separated objects result.
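One common way to separate each detected object is to crop its bounding rectangle. A minimal NumPy sketch of the idea, with made-up contour points and no Emgu dependency (in Emgu, `CvInvoke.BoundingRectangle` on each contour plus an image ROI would play the same role):

```python
import numpy as np

def crop_bounding_rect(image, contour_points):
    """Crop the axis-aligned bounding rectangle of a contour's (x, y) points."""
    pts = np.asarray(contour_points)
    x0, y0 = pts.min(axis=0)
    x1, y1 = pts.max(axis=0)
    return image[y0:y1 + 1, x0:x1 + 1]

img = np.zeros((10, 10), dtype=np.uint8)
img[2:5, 3:7] = 255                          # a single white blob
contour = [(3, 2), (6, 2), (6, 4), (3, 4)]   # its outer contour (made up)
obj = crop_bounding_rect(img, contour)       # the blob, isolated
```

Looping this over every entry in the contour vector yields one cropped image per detected object.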

Convert SharpDX.Direct3D11 Texture2D to System.Drawing Bitmap C#

I received some code from a friend for study purposes. It grabs a Bitmap via BitBlt and then converts it to a SharpDX.Direct3D11 Texture2D. After using the Texture2D, I would like to convert it back to a Bitmap; what is the best way to do this? Having little experience in this area, I'm stuck on this problem. Below is the conversion code.
public Texture2D TextureGenerator(Device device)
{
var hdcSrc = NativeMethods.GetDCEx(_hWnd, IntPtr.Zero, DeviceContextValues.Window | DeviceContextValues.Cache | DeviceContextValues.LockWindowUpdate);
var hdcDest = NativeMethods.CreateCompatibleDC(hdcSrc);
NativeMethods.GetWindowRect(_hWnd, out var rect);
var (width, height) = (rect.Right - rect.Left, rect.Bottom - rect.Top);
var hBitmap = NativeMethods.CreateCompatibleBitmap(hdcSrc, width, height);
var hOld = NativeMethods.SelectObject(hdcDest, hBitmap);
NativeMethods.BitBlt(hdcDest, 0, 0, width, height, hdcSrc, 0, 0, TernaryRasterOperations.SRCCOPY);
NativeMethods.SelectObject(hdcDest, hOld);
NativeMethods.DeleteDC(hdcDest);
NativeMethods.ReleaseDC(_hWnd, hdcSrc);
using var img = Image.FromHbitmap(hBitmap);
NativeMethods.DeleteObject(hBitmap);
using Bitmap bitmap = img.Clone(Rectangle.FromLTRB(0, 0, width, height), PixelFormat.Format32bppArgb);
var bits = bitmap.LockBits(Rectangle.FromLTRB(0, 0, width, height), ImageLockMode.ReadOnly, bitmap.PixelFormat);
// Use the locked stride, not width * 4: rows may be padded
DataBox data = new DataBox { DataPointer = bits.Scan0, RowPitch = bits.Stride, SlicePitch = bits.Stride * bits.Height };
Texture2DDescription texture2dDescription = new Texture2DDescription
{
ArraySize = 1,
BindFlags = BindFlags.ShaderResource | BindFlags.RenderTarget,
CpuAccessFlags = CpuAccessFlags.None,
Format = Format.B8G8R8A8_UNorm,
Height = height,
MipLevels = 1,
SampleDescription = new SampleDescription(1, 0),
Usage = ResourceUsage.Default,
Width = width
};
Texture2D texture2d = new Texture2D(device, texture2dDescription, new[] { data });
bitmap.UnlockBits(bits);
return texture2d;
}
I tried looking in other topics on StackOverflow but I didn't find this type of conversion.

GDI hardware acceleration

So I've heard that GDI supports hardware acceleration. I have this code here:
var x = 1;
if (solidBrush == IntPtr.Zero)
{
solidBrush = CreateSolidBrush(ColorTranslator.ToWin32(Color.FromArgb(120, 120, 120)));
hDC = CreateGraphics().GetHdc();
}
index += x;
int w = x;
int h = Height;
//create memory device context
var memdc = CreateCompatibleDC(hDC);
//create bitmap
var hbitmap = CreateCompatibleBitmap(hDC, index, h);
////select bitmap in to memory device context
var holdbmp = SelectObject(memdc, hbitmap);
RECT rect = new RECT(new Rectangle(0, 0, w, h));
FillRect(memdc, ref rect, solidBrush);
AlphaBlend(hDC, index - x, 0, w, h, memdc, 0, 0, w, h, new BLENDFUNCTION(0, 0, 128, 0));
SelectObject(memdc, holdbmp);
DeleteObject(hbitmap);
DeleteDC(memdc);
which uses GDI to draw an animation (a box), but I see no GPU usage. Is it that GDI just doesn't support HW acceleration?
Thanks.

c# Image resizing adding extra pixels

The following code behaves very oddly: it adds some extra spacing at the bottom of the image, and I cannot see why. Result of the code:
And this is the code that I am working with:
public static Image ReDraw(this Image main, int w, int h,
CompositingQuality quality = CompositingQuality.Default, //linear?
SmoothingMode smoothing_mode = SmoothingMode.None,
InterpolationMode ip_mode = InterpolationMode.NearestNeighbor)
{
//size
double dbl = (double)main.Width / (double)main.Height;
//preserve size ratio
if ((int)((double)h * dbl) <= w)
w = (int)((double)h * dbl);
else
h = (int)((double)w / dbl);
//draw
Image newImage = new System.Drawing.Bitmap(w, h);
Graphics thumbGraph = Graphics.FromImage(newImage);
thumbGraph.CompositingQuality = quality;
thumbGraph.SmoothingMode = smoothing_mode;
thumbGraph.InterpolationMode = ip_mode;
thumbGraph.Clear(Color.Transparent);
thumbGraph.DrawImage(main, 0, 0, w, h);
thumbGraph.DrawImage(main,
new System.Drawing.Rectangle(0, 0, w, h),
new System.Drawing.Rectangle(0, 0, main.Width, main.Height),
System.Drawing.GraphicsUnit.Pixel);
return newImage;
}
Setting the pixel offset mode on the Graphics object before drawing fixes the extra spacing:
thumbGraph.PixelOffsetMode = System.Drawing.Drawing2D.PixelOffsetMode.HighQuality;
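The ratio-preserving size math at the top of ReDraw can be checked in isolation. A small sketch of the same fit-inside computation (function name is mine, the arithmetic is the C# code's):

```python
def fit_inside(src_w, src_h, max_w, max_h):
    """Scale (src_w, src_h) to fit inside (max_w, max_h), preserving ratio."""
    ratio = src_w / src_h
    if int(max_h * ratio) <= max_w:
        return int(max_h * ratio), max_h   # height is the limiting dimension
    return max_w, int(max_w / ratio)       # width is the limiting dimension
```

The `int(...)` truncation here is also worth noting: it can shave a pixel off the computed dimension, which compounds the half-pixel sampling offset that PixelOffsetMode addresses.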

How do I compute DFT and its reverse with EMGU?

How can I compute the DFT of an image (using EMGU), display it and then compute the reverse to get back to the original?
I'm going to answer my own question here since it took me a while to figure out.
To test that it works here's an image
and here's the expected result after applying DFT.
And without further ado here's the code:
// Load image
Image<Gray, float> image = new Image<Gray, float>(@"C:\Users\me\Desktop\lines.png");
// Transform 1 channel grayscale image into 2 channel image
IntPtr complexImage = CvInvoke.cvCreateImage(image.Size, Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_32F, 2);
CvInvoke.cvSetImageCOI(complexImage, 1); // Select the channel to copy into
CvInvoke.cvCopy(image, complexImage, IntPtr.Zero);
CvInvoke.cvSetImageCOI(complexImage, 0); // Select all channels
// This will hold the DFT data
Matrix<float> forwardDft = new Matrix<float>(image.Rows, image.Cols, 2);
CvInvoke.cvDFT(complexImage, forwardDft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, 0);
CvInvoke.cvReleaseImage(ref complexImage);
// We'll display the magnitude
Matrix<float> forwardDftMagnitude = GetDftMagnitude(forwardDft);
SwitchQuadrants(ref forwardDftMagnitude);
// Now compute the inverse to see if we can get back the original
Matrix<float> reverseDft = new Matrix<float>(forwardDft.Rows, forwardDft.Cols, 2);
CvInvoke.cvDFT(forwardDft, reverseDft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_INV_SCALE, 0);
Matrix<float> reverseDftMagnitude = GetDftMagnitude(reverseDft);
pictureBox1.Image = image.ToBitmap();
pictureBox2.Image = Matrix2Bitmap(forwardDftMagnitude);
pictureBox3.Image = Matrix2Bitmap(reverseDftMagnitude);
private Bitmap Matrix2Bitmap(Matrix<float> matrix)
{
CvInvoke.cvNormalize(matrix, matrix, 0.0, 255.0, Emgu.CV.CvEnum.NORM_TYPE.CV_MINMAX, IntPtr.Zero);
Image<Gray, float> image = new Image<Gray, float>(matrix.Size);
matrix.CopyTo(image);
return image.ToBitmap();
}
// Magnitude is sqrt(Re^2 + Im^2); phase would be atan2(Im, Re).
// Here we compute log(sqrt(Re^2 + Im^2) + 1) to get the magnitude and
// rescale it so everything is visible
private Matrix<float> GetDftMagnitude(Matrix<float> fftData)
{
//The Real part of the Fourier Transform
Matrix<float> outReal = new Matrix<float>(fftData.Size);
//The imaginary part of the Fourier Transform
Matrix<float> outIm = new Matrix<float>(fftData.Size);
CvInvoke.cvSplit(fftData, outReal, outIm, IntPtr.Zero, IntPtr.Zero);
CvInvoke.cvPow(outReal, outReal, 2.0);
CvInvoke.cvPow(outIm, outIm, 2.0);
CvInvoke.cvAdd(outReal, outIm, outReal, IntPtr.Zero);
CvInvoke.cvPow(outReal, outReal, 0.5);
CvInvoke.cvAddS(outReal, new MCvScalar(1.0), outReal, IntPtr.Zero); // 1 + Mag
CvInvoke.cvLog(outReal, outReal); // log(1 + Mag)
return outReal;
}
// We have to switch quadrants so that the origin is at the image center
private void SwitchQuadrants(ref Matrix<float> matrix)
{
int cx = matrix.Cols / 2;
int cy = matrix.Rows / 2;
Matrix<float> q0 = matrix.GetSubRect(new Rectangle(0, 0, cx, cy));
Matrix<float> q1 = matrix.GetSubRect(new Rectangle(cx, 0, cx, cy));
Matrix<float> q2 = matrix.GetSubRect(new Rectangle(0, cy, cx, cy));
Matrix<float> q3 = matrix.GetSubRect(new Rectangle(cx, cy, cx, cy));
Matrix<float> tmp = new Matrix<float>(q0.Size);
q0.CopyTo(tmp);
q3.CopyTo(q0);
tmp.CopyTo(q3);
q1.CopyTo(tmp);
q2.CopyTo(q1);
tmp.CopyTo(q2);
}
Most of the information in this answer is from a question on the OpenCV mailing list and Steve Eddins' article on FFT in image processing.
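The forward/inverse pair can be sanity-checked numerically. A NumPy sketch of the same roundtrip on a random stand-in image (CV_DXT_INV_SCALE corresponds to the scaled inverse, i.e. `ifft2`):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))      # random stand-in for lines.png

forward = np.fft.fft2(image)    # CV_DXT_FORWARD
# Log-magnitude display with the origin moved to the center,
# as GetDftMagnitude + SwitchQuadrants do above
display = np.fft.fftshift(np.log(1.0 + np.abs(forward)))
reverse = np.fft.ifft2(forward) # CV_DXT_INV_SCALE (scaled inverse)
```

The real part of `reverse` reproduces the original image and the imaginary part is numerically zero, which is exactly what pictureBox3 should show: a copy of the input.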
