Very basically, I want to run a Fourier transform on an image using the Emgu.CV (4.4) library.
The non-CUDA code seems to do something going in the right direction:
using Emgu.CV;
using Emgu.CV.Structure;
using Emgu.CV.Cuda;
private static void StartDefault()
{
Image<Gray, float> image = new Image<Gray, float>(@"C:\temp\emgucv\original_4096.jpg");
var forward = new Image<Gray, float>(image.Rows, image.Cols);
var inverse = new Image<Gray, float>(image.Rows, image.Cols);
CvInvoke.Dft(image, forward, Emgu.CV.CvEnum.DxtType.Forward, image.Rows);
CvInvoke.Dft(forward, inverse, Emgu.CV.CvEnum.DxtType.Inverse, image.Rows);
forward.Save(@"C:\temp\emgucv\original_4096_dft_fw.jpg");
inverse.Save(@"C:\temp\emgucv\original_4096_dft_in.jpg");
}
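A side note I'd add here (my addition, not from the original post): OpenCV's inverse DFT does not rescale by default, so the round-trip image comes back multiplied by the number of elements. If the inverse image looks blown out, the combined inverse-and-scale flag is the usual fix, assuming the DxtType.InvScale value in Emgu's enum:
CvInvoke.Dft(forward, inverse, Emgu.CV.CvEnum.DxtType.InvScale, image.Rows); // inverse + 1/N scaling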
Now I tried to run the same thing using CUDA (in this case only the forward transform, to keep things simple).
private static void Start1Cuda()
{
Image<Gray, float> image = new Image<Gray, float>(@"C:\trash\emgucv\original_128.jpg");
CudaImage<Gray, float> cudaImage = new CudaImage<Gray, float>(image.Height, image.Width);
cudaImage.Upload(image);
// channels 2 here, else it crashes on .Dft with "is_complex_input || is_complex_output"
GpuMat resultMatForward = new GpuMat(cudaImage.Size.Height, cudaImage.Size.Width, cudaImage.Depth, channels: 2, true);
CudaInvoke.Dft(cudaImage, resultMatForward, image.Size, Emgu.CV.CvEnum.DxtType.Forward);
Image<Gray, float> forward = new Image<Gray, float>(image.Height, image.Width);
resultMatForward.Download(forward);
forward.Save(@"C:\trash\emgucv\original_128_dft.jpg");
}
But this crashes at the very last line, when trying to save the result. Exception: Emgu.CV.Util.CvException: 'OpenCV: image.channels() == 1 || image.channels() == 3 || image.channels() == 4'
As mentioned in the code comment, the only thing I can see as an issue here is the number of channels.
What has to be done to avoid the exception and get the DFT-transformed image saved?
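For what it's worth, here is a minimal sketch of one way around the exception (my own assumption of a fix, untested against 4.4): download the complex result, split it into real/imaginary planes, and save the normalized magnitude, which is single-channel and therefore something imwrite accepts.
// Sketch only: turn the 2-channel complex DFT output into a saveable 1-channel image.
// "resultMatForward" is the GpuMat from the question above.
private static void SaveDftMagnitude(GpuMat resultMatForward, string path)
{
using (Mat complex = new Mat())
using (Emgu.CV.Util.VectorOfMat planes = new Emgu.CV.Util.VectorOfMat())
using (Mat magnitude = new Mat())
using (Mat display = new Mat())
{
resultMatForward.Download(complex); // CV_32FC2: real + imaginary
CvInvoke.Split(complex, planes); // planes[0] = Re, planes[1] = Im
CvInvoke.Magnitude(planes[0], planes[1], magnitude);
// Log-compress so the spectrum is visible, then scale into 0..255 bytes.
CvInvoke.Add(magnitude, new ScalarArray(1.0), magnitude);
CvInvoke.Log(magnitude, magnitude);
CvInvoke.Normalize(magnitude, display, 0, 255, Emgu.CV.CvEnum.NormType.MinMax, Emgu.CV.CvEnum.DepthType.Cv8U);
CvInvoke.Imwrite(path, display); // single channel, so the save no longer throws
}
}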
I'm very new to EmguCV, so I need a little help.
The code below is mainly pieced together from various places found via Google. It takes a jpg file (which has a green background) and, from a separate form, lets you change the values of the h1 and h2 settings so as to create (reveal) a mask.
Now what I want to be able to do with this mask is to turn it transparent.
At the moment it just displays a black background around a person (for example), and then saves to file.
I need to know how to turn that black background transparent, if this is even the correct way to approach this.
Thanks in advance.
What I have so far in C#:
imgInput = new Image<Bgr, byte>(FileName);
Image<Hsv, Byte> hsvimg = imgInput.Convert<Hsv, Byte>();
//extract the hue and value channels
Image<Gray, Byte>[] channels = hsvimg.Split(); // split into components
Image<Gray, Byte> imghue = channels[0]; // hsv, so channels[0] is hue.
Image<Gray, Byte> imgval = channels[2]; // hsv, so channels[2] is value.
//filter out all but "the color you want"...seems to be 0 to 128 (64, 72) ?
Image<Gray, Byte> huefilter = imghue.InRange(new Gray(h1), new Gray(h2));
// TURN IT TRANSPARENT somewhere around here?
// using the hue filter as the mask
pictureBox2.Image = imgInput.Copy(huefilter).Bitmap;
imgInput.Copy(huefilter).Save("changedImage.png");
I am not sure I really understand what you are trying to do. But a mask is a binary object. A mask is usually black for what you do not want and white for what you do. As far as I know, there is no transparent mask, as to me that makes no sense. Masks are used to extract parts of an image by masking out the rest.
Maybe you could elaborate on what it is you want to do?
Doug
I think I may have found the solution I was looking for. I found some code on Stack Overflow which I've tweaked a little:
public Image<Bgra, Byte> MakeTransparent(Image<Bgr, Byte> image, double r1, double r2)
{
Mat imageMat = image.Mat;
Mat finalMat = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 4);
Mat tmp = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 1);
Mat alpha = new Mat(imageMat.Rows, imageMat.Cols, DepthType.Cv8U, 1);
// Build the alpha mask from a grayscale threshold: r1 is the threshold, r2 the max value.
CvInvoke.CvtColor(imageMat, tmp, ColorConversion.Bgr2Gray);
CvInvoke.Threshold(tmp, alpha, (int)r1, (int)r2, ThresholdType.Binary);
// Split the BGR channels, append the alpha plane, and merge into a 4-channel image.
VectorOfMat bgr = new VectorOfMat(3);
CvInvoke.Split(imageMat, bgr);
Mat[] bgra = { bgr[0], bgr[1], bgr[2], alpha };
VectorOfMat vector = new VectorOfMat(bgra);
CvInvoke.Merge(vector, finalMat);
return finalMat.ToImage<Bgra, Byte>();
}
I'm now looking at adding SmoothGaussian to the mask to create a kind of blend where the two images are layered, rather than a sharp cut-out.
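If it helps anyone, the feathering could be as simple as blurring the alpha mask before the Merge call in MakeTransparent above. A sketch, where the 15x15 kernel size is an arbitrary starting point to tune:
// Soften the hard binary alpha edge so the transparency ramps instead of cutting sharply.
CvInvoke.GaussianBlur(alpha, alpha, new System.Drawing.Size(15, 15), 0);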
We are performing image sharpening of a grayscale image of type Image<Gray, Byte> by subtracting the Laplacian of the image from the original image. The result, if saved as a JPEG, has well-defined edges and contrast. However, if the resultant image is converted to Bitmap or Image<Gray, Byte> and saved as a JPEG, the intensity is reduced and the sharpening effect is lost. I suspected that converting to Bitmap might be causing this problem, so I saved some of the intermediate images and also converted the image to Image<Gray, Byte>. This did not help. I also tried to scale the image using a simple method. This too did not help.
The same behaviour occurs when we perform Laplace and subtract the resultant image from the original image. Illustrations are below (the code has been modified for simplicity):
...
Image<Gray, Byte> sharpenedImage = Sharpen(originalprocessedImage);
ProcessedImage = sharpenedImage.ToBitmap(); // Or ProcessedImage.Bitmap;
ProcessedImage.Save("ProcessedImage.jpg"); // results in intensity loss
...
public Image<Gray, Byte> Sharpen(Image<Gray, Byte> inputFrame)
{
ConvolutionKernelF Sharpen1Kernel = new ConvolutionKernelF (new float[,] { { -1,-1,-1 }, { -1, 8,-1 }, { -1,-1,-1 } });
Image<Gray, float> newFloatImage = inputFrame.Convert<Gray, float>();
Image<Gray, float> newConvolutedImage = newFloatImage.Convolution(Sharpen1Kernel);
Image<Gray, float> convolutedScaledShiftedImage = newFloatImage.AddWeighted(newConvolutedImage, 1.0, 1.0, 0);
// added for testing
convolutedScaledShiftedImage.Save("ConvolutedScaledShiftedImage.jpg");
//Now try to scale and save:
Image<Gray, float> scaledImageFloat = convolutedScaledShiftedImage.Clone();
Image<Gray, float> scaledImageFloat2 = ScaleImage(scaledImageFloat);
// added for testing
scaledImageFloat.Save("ScaledImage.jpg");
// added for testing
scaledImageFloat2.Convert<Gray,Byte>().Save("ScaledImage-8Bits.jpg");
// both of these return images of lower intensity
return scaledImageFloat2.Convert<Gray, Byte>();
// alternative tried: return convolutedScaledShiftedImage.Convert<Gray, Byte>();
}
While ConvolutedScaledShiftedImage.jpg is brighter and has better contrast, ScaledImage.jpg and ScaledImage-8Bits.jpg have lost intensity levels compared to ConvolutedScaledShiftedImage.jpg. The same is true for ProcessedImage.jpg.
The ScaleImage method is below. It was not really necessary, but since Convert was losing intensity, I tried to do the conversion myself and check:
Image<Gray, float> ScaleImage(Image<Gray, float> inputImage)
{
double[] minValue;
double[] maxValue;
Point[] minLocation;
Point[] maxLocation;
Image<Gray, float> scaledImage = inputImage.Clone();
scaledImage.MinMax(out minValue, out maxValue, out minLocation, out maxLocation);
double midValue = (minValue[0] + maxValue[0]) / 2; // currently unused
double rangeValue = maxValue[0] - minValue[0];
double scaleFactor = 1 / rangeValue;
double shiftFactor = midValue; // currently unused
// Shift so the minimum becomes 0, then scale the full range up to 0..255.
Image<Gray, float> scaledImage1 = scaledImage.ConvertScale<float>(1.0, Math.Abs(minValue[0]));
Image<Gray, float> scaledImage2 = scaledImage1.ConvertScale<float>(scaleFactor * 255, 0);
return scaledImage2;
}
Would anybody be able to suggest what could be going wrong and why the intensities are lost in the above operations? Thanks.
Edit: fixed the formatting issue... conversion was from Image<Gray, float> to Image<Gray, Byte>
Edit 12-Jan: I dug further into the OpenCV code and, as I understand it, when you save an image of type Image<Gray, float> to JPEG, imwrite() first converts the image to an 8-bit image via image.convertTo(temp, CV_8U); and then writes it to the file. When the same operation is performed with Convert<Gray, Byte>(), the intensities are not the same. So it is not clear what the difference between the two is.
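To illustrate the suspicion (a sketch of mine, not verified against the OpenCV sources): convertTo with CV_8U saturate-casts each float pixel with no rescaling, so any value above 255 simply clips. Explicitly normalizing into 0..255 before saving sidesteps the implicit conversion entirely; floatImage here stands in for convolutedScaledShiftedImage:
// Normalize the float image into 0..255 bytes ourselves instead of relying on imwrite's convertTo.
Mat bytes = new Mat();
CvInvoke.Normalize(floatImage, bytes, 0, 255, Emgu.CV.CvEnum.NormType.MinMax, Emgu.CV.CvEnum.DepthType.Cv8U);
CvInvoke.Imwrite("NormalizedImage.jpg", bytes);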
Hello dear forum members!
I am working on a project to detect view changes in security cameras. I mean, when someone tries to move a camera (some kind of sabotage...), I have to notice it. My idea is:
capture an image from the camera every 10 seconds and compare the two pictures (the old one and the current one).
There are almost 70 cameras I need to monitor, so I can't use live streaming because it would saturate my internet connection. I use the Emgu CV library for this task, but during my work I got stuck on a problem. Here is the piece of code I prepared:
public class EmguCV
{
static public Model Test(string BaseImagePath, string ActualImagePath)
{
double noise = 0;
Mat curr64f = new Mat();
Mat prev64f = new Mat();
Mat hann = new Mat();
Mat src1 = CvInvoke.Imread(BaseImagePath, Emgu.CV.CvEnum.ImreadModes.Grayscale);
Mat src2 = CvInvoke.Imread(ActualImagePath, Emgu.CV.CvEnum.ImreadModes.Grayscale);
Size size = new Size(50, 50); // note: currently unused
src1.ConvertTo(prev64f, Emgu.CV.CvEnum.DepthType.Cv64F);
src2.ConvertTo(curr64f, Emgu.CV.CvEnum.DepthType.Cv64F);
CvInvoke.CreateHanningWindow(hann, src1.Size, Emgu.CV.CvEnum.DepthType.Cv64F);
MCvPoint2D64f shift = CvInvoke.PhaseCorrelate(curr64f, prev64f, hann, out noise);
double value = noise;
double radius = Math.Sqrt(shift.X * shift.X + shift.Y * shift.Y);
Model Test = new Model() { osX = shift.X, osY = shift.Y, noise = value };
return Test;
}
}
Therefore, I have two questions:
How do I convert a Bitmap to a Mat structure?
At the moment I read the images to compare from disk by file path. But I would like to compare a collection of bitmaps without saving them to my hard drive.
Do you know any other way to detect a shift between two pictures? I would be really grateful for any other suggestion in this area.
Regards,
Mariusz
I know it's very late to answer this, but today I was looking into this problem on the internet and I found something like this:
Bitmap bitmap; //This is your bitmap
Image<Bgr, byte> imageCV = new Image<Bgr, byte>(bitmap); //Image Class from Emgu.CV
Mat mat = imageCV.Mat; //This is your Image converted to Mat
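Building on that, here is a rough sketch (my assumption, not from the original posts) of how the Test method above could accept Bitmaps directly and skip the disk round-trip, using the same Image-to-Mat trick:
static public Model Test(Bitmap baseBitmap, Bitmap actualBitmap)
{
// Convert the in-memory Bitmaps straight to grayscale Mats.
Mat src1 = new Image<Gray, byte>(baseBitmap).Mat;
Mat src2 = new Image<Gray, byte>(actualBitmap).Mat;
Mat prev64f = new Mat();
Mat curr64f = new Mat();
Mat hann = new Mat();
double noise = 0;
src1.ConvertTo(prev64f, Emgu.CV.CvEnum.DepthType.Cv64F);
src2.ConvertTo(curr64f, Emgu.CV.CvEnum.DepthType.Cv64F);
CvInvoke.CreateHanningWindow(hann, src1.Size, Emgu.CV.CvEnum.DepthType.Cv64F);
MCvPoint2D64f shift = CvInvoke.PhaseCorrelate(curr64f, prev64f, hann, out noise);
return new Model() { osX = shift.X, osY = shift.Y, noise = noise };
}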
Hello, I'm trying to apply point tracking to a scene.
Now I want to get only the points that are moving horizontally. Anyone have any thoughts on this?
The arrays "Actual" and "nextFeature" contain the relevant x,y coordinates. I tried taking the difference between the two arrays, but it did not work. I tried getting the optical flow using Farneback, but it didn't give me a satisfying result. I would really appreciate it if anyone could give me any thoughts on how to get only the points moving in a horizontal line.
thanks.
Here is the code.
private void ProcessFrame(object sender, EventArgs arg)
{
PointF[][] Actual = new PointF[0][];
if (Frame == null)
{
Frame = _capture.RetrieveBgrFrame();
Previous_Frame = Frame.Copy();
}
else
{
Image<Gray, byte> grayf = Frame.Convert<Gray, Byte>();
Actual = grayf.GoodFeaturesToTrack(300, 0.01d, 0.01d, 5);
Image<Gray, byte> frame1 = Frame.Convert<Gray, Byte>();
Image<Gray, byte> prev = Previous_Frame.Convert<Gray, Byte>();
Image<Gray, float> velx = new Image<Gray, float>(Frame.Size);
Image<Gray, float> vely = new Image<Gray, float>(Previous_Frame.Size);
Frame = _capture.RetrieveBgrFrame().Resize(300,300,Emgu.CV.CvEnum.INTER.CV_INTER_AREA);
Byte[] status;
Single[] trer;
PointF[][] feature = Actual;
PointF[] nextFeature = new PointF[300];
Image<Gray, Byte> buf1 = new Image<Gray, Byte>(Frame.Size);
Image<Gray, Byte> buf2 = new Image<Gray, Byte>(Frame.Size);
opticalFlowFrame = new Image<Bgr, Byte>(prev.Size);
Image<Bgr, Byte> FlowFrame = new Image<Bgr, Byte>(prev.Size);
OpticalFlow.PyrLK(prev, frame1, Actual[0], new System.Drawing.Size(10, 10), 0, new MCvTermCriteria(20, 0.03d),
out nextFeature, out status, out trer);
for (int x = 0; x < Actual[0].Length ; x++)
{
opticalFlowFrame.Draw(new CircleF(new PointF(nextFeature[x].X, nextFeature[x].Y), 1f), new Bgr(Color.Blue), 2);
}
new1 = old;
old = nextFeature;
Actual[0] = nextFeature;
Previous_Frame = Frame.Copy();
captureImageBox.Image = Frame;
grayscaleImageBox.Image = opticalFlowFrame;
//cannyImageBox.Image = velx;
//smoothedGrayscaleImageBox.Image = vely;
}
}
First... I can only give you a general idea about this, not a code snippet...
Here's how you may do this:
(One of the many possible approaches of tackling this problem)
Take the zero-th frame and pass it through goodFeaturesToTrack. Collect the points in an array ...say, initialPoints.
Grab the (zero + one)-th frame. With respect to the points grabbed in step 1, run it through calcOpticalFlowPyrLK. Store the next points in another array ...say, nextPoints. Also keep track of the status and error vectors.
Now, with initialPoints and nextPoints in tow, we leave the comfort of OpenCV and do things our way. For every feature in initialPoints and nextPoints (with status set to 1 and error below an acceptable threshold), we calculate the gradient between the points.
Accept for horizontal motion only those points whose angle of slope is around either 0 degrees or 180 degrees. Now... vector directions won't lie perfectly at 0 or 180... so take into account a bit of +/- threshold.
Repeat steps 1 to 4 for all frames.
Going through the code you posted... it seems like you've almost nailed steps 1 and 2.
However, once you get the vector nextFeature, it seems like you're drawing circles around it. Interesting ...but not what we need.
Check if you can go about implementing the gradient calculation and filtering.
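If it helps, here is a rough fragment illustrating steps 3 and 4 (my own sketch, using the nextFeature, Actual, and status arrays from the question; the 10-degree tolerance is an arbitrary pick):
// Keep only the features whose motion vector is close to horizontal.
List<PointF> horizontal = new List<PointF>();
for (int i = 0; i < nextFeature.Length; i++)
{
if (status[i] != 1) continue; // feature lost between frames
float dx = nextFeature[i].X - Actual[0][i].X;
float dy = nextFeature[i].Y - Actual[0][i].Y;
double angle = Math.Abs(Math.Atan2(dy, dx) * 180.0 / Math.PI); // 0..180
// Accept vectors near 0 or 180 degrees, i.e. nearly horizontal motion.
if (angle < 10.0 || angle > 170.0)
horizontal.Add(nextFeature[i]);
}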
I am having a problem with EmguCV. I used a demo application and edited it to my needs.
It involves the following function:
public override Image<Gray, byte> DetectSkin(Image<Bgr, byte> Img, IColor min, IColor max)
{
Image<Hsv, Byte> currentHsvFrame = Img.Convert<Hsv, Byte>();
Image<Gray, byte> skin = currentHsvFrame.InRange((Hsv)min, (Hsv)max);
return skin;
}
In the demo application, the image comes from a video. The frame is captured from the video like this:
Image<Bgr, Byte> currentFrame;
grabber = new Emgu.CV.Capture(@".\..\..\..\M2U00253.MPG");
grabber.QueryFrame();
currentFrame = grabber.QueryFrame();
In my application, the image comes from a Microsoft Kinect stream.
I use the following function:
private void SensorColorFrameReady(object sender, ColorImageFrameReadyEventArgs e)
{
using (ColorImageFrame colorFrame = e.OpenColorImageFrame())
{
if (colorFrame != null)
{
// Copy the pixel data from the image to a temporary array
colorFrame.CopyPixelDataTo(this.colorPixels);
// Write the pixel data into our bitmap
this.colorBitmap.WritePixels(
new Int32Rect(0, 0, this.colorBitmap.PixelWidth, this.colorBitmap.PixelHeight),
this.colorPixels,
this.colorBitmap.PixelWidth * sizeof(int),
0);
Bitmap b = BitmapFromWriteableBitmap(this.colorBitmap);
currentFrame = new Image<Bgr, byte>(b);
currentFrameCopy = currentFrame.Copy();
skinDetector = new YCrCbSkinDetector();
Image<Gray, Byte> skin = skinDetector.DetectSkin(currentFrame, YCrCb_min, YCrCb_max);
}
}
}
private static System.Drawing.Bitmap BitmapFromWriteableBitmap(WriteableBitmap writeBmp)
{
System.Drawing.Bitmap bmp;
using (System.IO.MemoryStream outStream = new System.IO.MemoryStream())
{
BitmapEncoder enc = new BmpBitmapEncoder();
enc.Frames.Add(BitmapFrame.Create((BitmapSource)writeBmp));
enc.Save(outStream);
bmp = new System.Drawing.Bitmap(outStream);
}
return bmp;
}
Now, the demo application works, and mine doesn't. Mine throws an exception (the screenshots of the exception and of the image being processed are not reproduced here). When I run the working demo application, its image looks, to my eyes, exactly the same as mine. I really don't understand this exception. Help is very welcome!
To make things easier I've uploaded a working WPF solution for you to the code reference sourceforge page I've been building:
http://sourceforge.net/projects/emguexample/files/Capture/Kinect_SkinDetector_WPF.zip/download
https://sourceforge.net/projects/emguexample/files/Capture/
This was designed and tested using EMGU x64 2.42, so in the Lib folder of the project you will find the referenced dlls. If you are using a different version, you will need to delete the current references and replace them with the version you're using.
Secondly, the project is designed, like all projects from the code reference library, to be built from the Emgu.CV.Example folder into the ..\EMGU 2.X.X.X\bin.. global bin directory, where the compiled OpenCV libraries sit within a folder, either x86 or x64.
If you struggle to get the code working, I can provide all the components, but I hate redistributing all the OpenCV files that you already have, so let me know if you want this.
You will need to resize the MainWindow manually to display both images, as I didn't spend too much time playing with the layout.
So the code...
In the form initialisation method I check for the Kinect sensor and set up the event handlers for the frames-ready events. I have left the original threshold values and skinDetector type in, although I don't use the EMGU version; I just forgot to remove it. You will need to play with the threshold values and so on.
//// Look through all sensors and start the first connected one.
//// This requires that a Kinect is connected at the time of app startup.
//// To make your app robust against plug/unplug,
//// it is recommended to use KinectSensorChooser provided in Microsoft.Kinect.Toolkit (See components in Toolkit Browser).
foreach (var potentialSensor in KinectSensor.KinectSensors)
{
if (potentialSensor.Status == KinectStatus.Connected)
{
this.KS = potentialSensor;
break;
}
}
//If we have a Kinect Sensor we will set it up
if (null != KS)
{
// Turn on the color stream to receive color frames
KS.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
//Turn on the depth stream to receive depth frames
KS.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
//Start the Streaming process
KS.Start();
//Create a link to a callback to deal with the frames
KS.AllFramesReady += new EventHandler<AllFramesReadyEventArgs>(KS_AllFramesReady);
//We set up a thread to process the image/disparity map from the kinect
//Why? The kinect AllFramesReady has a timeout; if it has not finished, the streams will simply stop
KinectBuffer = new Thread(ProcessBuffer);
hsv_min = new Hsv(0, 45, 0);
hsv_max = new Hsv(20, 255, 255);
YCrCb_min = new Ycc(0, 131, 80);
YCrCb_max = new Ycc(255, 185, 135);
detector = new AdaptiveSkinDetector(1, AdaptiveSkinDetector.MorphingMethod.NONE);
skinDetector = new YCrCbSkinDetector();
}
I always process the Kinect data in a new thread for speed, but you may want to upgrade this to a BackgroundWorker if you plan to do any heavier processing, so it is better managed.
The thread calls the ProcessBuffer() method. You can ignore all the commented code, as it is the remnant of the code used to display the depth image. Again, I'm using the Marshal.Copy method to keep things fast, but the thing to look for is Dispatcher.BeginInvoke in WPF, which allows the images to be displayed from the Kinect thread. This is required because I'm not processing on the main thread.
//This takes the byte[] array from the kinect and makes a bitmap from the colour data for us
byte[] pixeldata = new byte[CF.PixelDataLength];
CF.CopyPixelDataTo(pixeldata);
System.Drawing.Bitmap bmap = new System.Drawing.Bitmap(CF.Width, CF.Height, System.Drawing.Imaging.PixelFormat.Format32bppRgb);
BitmapData bmapdata = bmap.LockBits(new System.Drawing.Rectangle(0, 0, CF.Width, CF.Height), ImageLockMode.WriteOnly, bmap.PixelFormat);
IntPtr ptr = bmapdata.Scan0;
Marshal.Copy(pixeldata, 0, ptr, CF.PixelDataLength);
bmap.UnlockBits(bmapdata);
//display our colour frame
currentFrame = new Image<Bgr, Byte>(bmap);
Image<Gray, Byte> skin2 = skinDetector.DetectSkin(currentFrame, YCrCb_min, YCrCb_max);
ExtractContourAndHull(skin2);
DrawAndComputeFingersNum();
//Display our images using WPF Dispatcher Invoke as this is a sub thread.
Dispatcher.BeginInvoke((Action)(() =>
{
ColorImage.Source = BitmapSourceConvert.ToBitmapSource(currentFrame);
}), System.Windows.Threading.DispatcherPriority.Render, null);
Dispatcher.BeginInvoke((Action)(() =>
{
SkinImage.Source = BitmapSourceConvert.ToBitmapSource(skin2);
}), System.Windows.Threading.DispatcherPriority.Render, null);
I hope this helps. I will at some point neaten up the code I uploaded.
Cheers