Currently, I am working on a real-time IRIS detection application.
I want to perform an invert operation on the frames taken from the web camera, like this:
I managed to get this line of code, but this is not giving the above results. Maybe parameters need to be changed, but I am not sure.
CvInvoke.cvThreshold(grayframeright, grayframeright, 160, 255.0, Emgu.CV.CvEnum.THRESH.CV_THRESH_BINARY_INV);
From the images above it feels that the second image is the negative of the first image (correct me if I am wrong).
The function you are using is a threshold function, i.e. it renders a pixel as white if its intensity falls on one side of the specified threshold and otherwise renders it as black.
To find the negative of an image you can use one of the following methods.
Taking the NOT of an image.
Image<Bgr, Byte> img2 = img1.Not(); // img1 can be a static image or your current captured frame
For more details you can refer to the documentation here.
If you want to invert an image you can do the following:
Mat white = Mat::ones(grayframeright.rows, grayframeright.cols, grayframeright.type()) * 255; // Mat::ones alone would give a matrix of 1s, not 255s
Mat dst = white - grayframeright;
Also note that the pupil can be detected with an OpenCV detector initialized with the Haar cascade for eyes that ships with OpenCV.
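For completeness, a minimal sketch of that eye detection, assuming the CascadeClassifier API of recent EmguCV versions and that haarcascade_eye.xml (shipped with OpenCV) sits next to the executable; frame is a hypothetical Image<Bgr, byte> holding the current capture:
CascadeClassifier eyeCascade = new CascadeClassifier("haarcascade_eye.xml");
Image<Gray, byte> gray = frame.Convert<Gray, byte>(); // the cascade works on grayscale
Rectangle[] eyes = eyeCascade.DetectMultiScale(gray, 1.1, 4); // scale factor, min neighbours
foreach (Rectangle eye in eyes)
    frame.Draw(eye, new Bgr(0, 255, 0), 2); // mark each detected eye region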
I've done quite a bit of digging and haven't found any questions that seem to match my exact issue. I'm also a CV noob so I've come up empty trying to work out a solution from what I've found so far.
I have a set of contours that I'm able to display (filled in) on an image via:
Mat img = CvInvoke.Imread(HttpContext.Current.Server.MapPath("IMAGELOCATION"));
VectorOfMat contours = new VectorOfMat();
..
//Various Morphological Transformations
..
CvInvoke.FindContours(maskDilate, contours, null, Emgu.CV.CvEnum.RetrType.External, Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxNone);
CvInvoke.DrawContours(img, contours, -1, new MCvScalar(255), -1); // negative index draws all contours; negative thickness fills them
These contours are non-rectangular and the data within them is of interest. Originally I planned on cropping out all of the contours but it seems like the better solution is just to blank out the rest of the original image except for the ROI outlined by my contours, without affecting the inner part of the contours.
This seems like a job for a mask of the contours overlaid on the original image, with something like the matrix containing the original image blanked to zeros except for each ROI outlined by the contours.
The closest I've come to what I need is from this thread: copying non-rectangular roi opencv. However, I'm working in C# and don't have the coordinates outlining my ROIs because they're often oddly shaped and can vary in location on the image. Some direction or help would be much appreciated!
Alright I figured this out. It appears I was making things more complicated than necessary.
First I created a Matrix to hold the final result. I was under the impression that I could specify the background color using the MCvScalar, but it didn't seem to affect my final result when I tried different args and always resulted in a black background. If someone has an explanation for this, that'd be great.
Matrix<int> blackedOut = new Matrix<int>(img.Size);
blackedOut.SetValue(new MCvScalar(255, 255, 255));
Finding and drawing the contours was completely unnecessary since I already had a mask (maskDilate) which outlined my regions of interest.
So all I had to do was copy the original image to the blackedOut matrix with maskDilate applied.
img.CopyTo(blackedOut, maskDilate);
So if you're in a similar situation to me you should already have a mask that you're passing into FindContours that you can use directly without messing with contours at all.
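Putting the pieces together as one runnable sketch (Mat-based; img and maskDilate are assumed from the snippets above, with maskDilate an 8-bit single-channel mask):
Mat blackedOut = new Mat(img.Size, img.Depth, img.NumberOfChannels);
blackedOut.SetTo(new MCvScalar(0, 0, 0)); // black background everywhere
img.CopyTo(blackedOut, maskDilate); // copy only the pixels under the mask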
I used this as a reference https://www.bytefish.de/blog/extracting_contours_with_opencv/
I am using C# to write a program that detects the paper edges and crops the square paper region out of the images.
Below is the image I wish to crop. The paper will always appear at the bottom of the page.
I had read through these links but I still have no idea how to do it.
OpenCV C++/Obj-C: Detecting a sheet of paper / Square Detection
Edit: I am using EMGU for this OCR project
You can also:
Convert your image to grayscale
Apply ThresholdBinary by the pixel intensity
Find contours.
To see examples of finding contours you can look at this post.
The FindContours method doesn't care about the contours' size. The only thing to be done here before finding contours is emphasizing them by binarizing the image (and we do this in step 2).
For more info also look at OpenCV docs: findContours, example.
Find the proper contour by the size and position of its bounding box.
(In this step we iterate over all found contours and try to figure out which one is the contour of the paper sheet, using the known paper dimensions, their proportions, and the relative position - the left bottom corner of the image.)
Crop the image using the bounding box of the sheet of paper.
Image<Gray, byte> grayImage = new Image<Gray, byte>(colorImage);
Image<Bgr, byte> color = new Image<Bgr, byte>(colorImage);
grayImage = grayImage.ThresholdBinary(new Gray(thresholdValue), new Gray(255));
using (MemStorage store = new MemStorage())
for (Contour<Point> contours = grayImage.FindContours(Emgu.CV.CvEnum.CHAIN_APPROX_METHOD.CV_CHAIN_APPROX_NONE, Emgu.CV.CvEnum.RETR_TYPE.CV_RETR_TREE, store); contours != null; contours = contours.HNext)
{
Rectangle r = CvInvoke.cvBoundingRect(contours, 1);
// filter contours by position and size of the box
}
// crop the image using found bounding box
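For that final crop, one possibility (a sketch; r is assumed to be the bounding box that survived your filter above) is to use the ROI property of the EmguCV image:
color.ROI = r; // restrict the image to the sheet's bounding box
Image<Bgr, byte> cropped = color.Copy(); // Copy() honours the ROI
color.ROI = Rectangle.Empty; // reset the ROI on the source image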
UPD: I have added more details.
Decide on the Paper color
Decide on a delta to allow the color to vary
Decide on points along the bottom to do vertical tests
Do the vertical tests going up, collecting the minimum y where the color stops appearing
Do at least 10-20 such tests
The resulting y should be 1 more than what you want to keep. You may need to insert a limit to avoid cropping everything if the image is too bright. Either refine the algorithm or mark such an image as an exception for manual treatment!
To crop you use the DrawImage overload with source and dest rectangles!
Here are a few more hints:
To find the paper color you can go diagonally up-right from the bottom-left corner until you hit a pixel with a Color.GetBrightness of > 0.8; then go 2 pixels further to get clear of any antialiased pixels.
A reasonable delta will depend on your images; start with 10%
Use a random walk along the bottom; when you are done maybe add one extra pass in the close vicinity of the minimum found in pass one.
The vertical test can use GetPixel to get at the colors or if that is too slow you may want to look into LockBits. But get the search algorithm right first, only then think about optimizing!
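A rough sketch of one such vertical test with GetPixel (the brightness comparison is a stand-in for whatever color-plus-delta test you settle on):
// Scan upward from the bottom at column x until the paper color stops appearing.
int FindPaperTop(Bitmap bmp, int x, Color paper, float delta)
{
    for (int y = bmp.Height - 1; y >= 0; y--)
    {
        Color c = bmp.GetPixel(x, y);
        if (Math.Abs(c.GetBrightness() - paper.GetBrightness()) > delta)
            return y + 1; // the row below this one is still paper
    }
    return 0; // the whole column looked like paper
}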
If you run into trouble with your code, expand your question!
I am trying to improve the quality of an image.
I use emgu for this.
I use this code to change the contrast (I think!).
Image<Bgr, byte> improveMe = new Image<Bgr, byte>(grid);
improveMe._EqualizeHist();
For a daytime image I get this:
For a night time image I get this:
Obviously, not so good!
The 1st image is nice and lush; the 2nd, as you can see, could be described as over-exposed.
Are there ways to avoid getting such a poor image at night-time? Is it because the image is now lower in color channels (if that makes sense)? Should I check the min/max color ranges of an image before deciding to apply this filter? Should I use a completely different filter?
Reading material links are welcome answers as well...
There are many ways to do this, some better than others, depending on whether it is a video feed or a still image, and even on camera shutter speed... and so on.
I would recommend you to try "adaptive threshold" (EMGU ThresholdAdaptive function) and also check some white balance algorithms. Check this one: White balance algorithm
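A minimal sketch of the adaptive-threshold suggestion via the CvInvoke wrapper (the block size of 11 and constant of 2 are just starting values to tune; improveMe is the image from the question):
Image<Gray, byte> gray = improveMe.Convert<Gray, byte>(); // thresholding works on a single channel
Image<Gray, byte> result = gray.CopyBlank();
CvInvoke.AdaptiveThreshold(gray, result, 255,
    Emgu.CV.CvEnum.AdaptiveThresholdType.GaussianC,
    Emgu.CV.CvEnum.ThresholdType.Binary, 11, 2); // a local threshold copes better with uneven lighting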
I got a bitmap as a source;
I created a Emgu image with Image<Bgr,Byte> img = new Image<Bgr,Byte>(bmp);
I converted it to a YCbCr image using Image<Ycc,Byte> YCB = img.Convert<Ycc,Byte>();
I dragged a imagebox from the toolbox and assigned it with YCB -----> imagebox1.Image=YCB;
but the result shows the image in RGB format, just like the source bitmap.
I don't understand what went wrong.
Could someone give me some clues?
Have you made any alterations to YCB? If you simply display it then it will look identical to the original.
If you right-click on your imagebox while running your program and select Property, it will tell you the type of image and show you the data held within the YCB image; this should be different from your original.
Alternatively, just show one channel of your image matrix. For your BGR image show the blue channel; this will obviously be displayed as a single-colour grayscale image. Now for the YCB image show the Luma channel; again this will be displayed as a single-colour grayscale image. You will notice a slight change between them, as the luma represents the luminance of all 3 colour spectrums.
CvInvoke.cvShowImage("Blue", img [0]);
CvInvoke.cvShowImage("Luma", YCB[0]);
If you want to see a greater difference multiply the Luma by 2 and you have a completely different image.
YCB[0] *= 2;
CvInvoke.cvShowImage("Luma Change", YCB);
The CvInvoke.cvShowImage() method is a great tool for debugging your code and seeing what your code is doing to images step by step.
For the benefits of others Ycc is YCbCr colour space: http://en.wikipedia.org/wiki/YCbCr
Cheers,
Chris
I have an image where I need to change the background colour (E.g. changing the background of the example image below to blue).
However, the image is anti-aliased so I cannot simply do a replace of the background colour with a different colour.
One way I have tried is creating a second image that is just the background and changing the colour of that and merging the two images into one, however this does not work as the border between the two images is fuzzy.
Is there any way to do this, or some other way to achieve this that I have not considered?
Example image
Just using GDI+
Image image = Image.FromFile("cloud.png");
Bitmap bmp = new Bitmap(image.Width, image.Height);
using (Graphics g = Graphics.FromImage(bmp)) {
    g.Clear(Color.SkyBlue); // paint the new background first
    g.InterpolationMode = InterpolationMode.NearestNeighbor;
    g.PixelOffsetMode = PixelOffsetMode.None;
    g.DrawImage(image, Point.Empty); // the PNG's alpha blends the cloud over it
}
resulted in:
Abstractly
Each pixel in your image is a (R, G, B) vector, where each component is in the range [0, 1]. You want a transform, T, that will convert all of the pixels in your image to a new (R', G', B') under the following constraints:
black should stay black
T(0, 0, 0) = (0, 0, 0)
white should become your chosen color C*
T(1, 1, 1) = C*
A straightforward way to do this is to choose the following transform T:
T(c) = C* .* c (where .* denotes element-wise multiplication)
This is just standard image multiplication.
Concretely
If you're not worried about performance, you can use the (very slow) methods GetPixel and SetPixel on your Bitmap to apply this transform for each pixel in it. If it's not clear how to do this, just say so in a comment and I'll add a detailed explanation for that part.
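For illustration, a (slow) sketch of that per-pixel loop, with C* hard-coded as sky blue; cloud.png is the example image's assumed file name:
// Apply T(c) = C* .* c to every pixel: black stays black, white becomes C*.
Bitmap bmp = new Bitmap("cloud.png");
Color target = Color.SkyBlue; // C*
for (int y = 0; y < bmp.Height; y++)
{
    for (int x = 0; x < bmp.Width; x++)
    {
        Color c = bmp.GetPixel(x, y);
        bmp.SetPixel(x, y, Color.FromArgb(c.A,
            c.R * target.R / 255,
            c.G * target.G / 255,
            c.B * target.B / 255));
    }
}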
Comparison
Compare this to the method presented by LarsTech. The method presented here is on the top; the method presented by LarsTech is on the bottom. Notice the undesirable edge effects on the bottom icon (white haze on the edges).
And here is the image difference of the two:
Afterthought
If your source image has a transparent (i.e. transparent-white) background and black foreground (as in your example), then you can simply make your transform T(a, r, g, b) = (a, 0, 0, 0) then draw your image on top of whatever background color you want, as LarsTech suggested.
If it is a uniform colour you want to replace you could convert this to an alpha. I wouldn't like to code it myself!
You could use GIMP's Color To Alpha source code (It's GPL), here's a version of it
P.S. Not sure how to get the latest.
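As a rough illustration of the idea (a simplification of the full GIMP algorithm, valid only for a pure white background): treat the smallest channel as fully transparent white and un-premultiply the rest.
Color ToAlpha(Color c)
{
    float r = c.R / 255f, g = c.G / 255f, b = c.B / 255f;
    float a = 1f - Math.Min(r, Math.Min(g, b)); // how far the pixel is from pure white
    if (a <= 0f) return Color.FromArgb(0, 255, 255, 255); // pure white: fully transparent
    return Color.FromArgb((int)(a * 255),
        (int)((r - (1f - a)) / a * 255), // un-premultiply each channel
        (int)((g - (1f - a)) / a * 255),
        (int)((b - (1f - a)) / a * 255));
}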
Background removal/replacement is, IMO, more art than science; you'll not find a one-algorithm-fits-all solution for this. BUT depending on how desperate or interested you are in solving this problem, you may want to consider the following explanation:
Let’s assume you have a color image.
Use your choice of decoding mechanism and generate a gray scale / luminosity image of your color image.
Plot a graph (metaphorically speaking) of the numeric value of a pixel (x) vs the number of pixels in the image with that value (y) - aka a luminosity histogram.
Now if your background is large enough (or small), you’d see a part of the graph representing the distribution of a range of pixels which constitute your background. You may want to select a slightly wider range to handle the anti-aliasing (based on a fixed offset that you define if you are dealing with similar images) and call it the luminosity range for your background.
It would make your life easier if you know at least one pixel (sample/median pixel value) out of the range of pixels which defines your background, that way you can ‘look up’ the part of the graph which defines your background.
Once you have the luminosity range for the background, run through the original image's pixels and compare each pixel's luminosity value with that range. If it falls within the range, replace the pixel in the original image with the desired color, preferably luminosity-shifted based on the original pixel and the sample pixel, so that the replaced background looks anti-aliased too.
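A sketch of that replacement loop (the background range [lo, hi] would come out of the histogram analysis above; the luminosity shift for anti-aliasing is left out for brevity):
// Replace pixels whose luminosity falls in [lo, hi] with newColor.
void ReplaceBackground(Bitmap bmp, byte lo, byte hi, Color newColor)
{
    for (int y = 0; y < bmp.Height; y++)
    {
        for (int x = 0; x < bmp.Width; x++)
        {
            Color c = bmp.GetPixel(x, y);
            int luma = (int)(0.299 * c.R + 0.587 * c.G + 0.114 * c.B); // Rec. 601 luma
            if (luma >= lo && luma <= hi)
                bmp.SetPixel(x, y, newColor);
        }
    }
}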
This is not a perfect solution and there are a lot of scenarios where it might fail / partially fail, but again it would work for the sample image that you had attached with your question.
Also there are a lot of performance improvement opportunities, including GPGPU etc.
Another possible solution would be to use one of the pre-built third-party image processing libraries; there are a few open-source ones, such as Camellia, but I am not sure what features are provided or how sophisticated they are.