I am trying to improve the quality of an image.
I use emgu for this.
I use this code to change the contrast (I think!).
Image<Bgr, byte> improveMe = new Image<Bgr, byte>(grid); // 'grid' is the source image
improveMe._EqualizeHist(); // equalizes the histogram of each BGR channel in place
For a daytime image I get this:
For a night time image I get this:
Obviously, not so good!
The 1st image is nice and lush, and the 2nd, as you can see, could be described as over-exposed.
Are there ways to avoid getting such a poor image at night-time? Is it because the image now has less colour information (if that makes sense)? Should I check the min/max colour ranges of an image before deciding to apply this filter? Should I use a completely different filter?
Reading material links are welcome answers as well...
There are many ways to do this, some better than others depending on whether it is a video feed or a still image, on the camera shutter speed, and so on.
I would recommend trying adaptive thresholding (Emgu's ThresholdAdaptive function) and also checking some white balance algorithms. Check this one: White balance algorithm
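A minimal sketch of what that might look like, assuming an Emgu 2.x-style API (the block size and offset constants below are illustrative values to tune, not taken from the question):

// Adaptive thresholding computes a threshold per neighbourhood, so a dark
// night frame is not blown out by a single global stretch.
Image<Gray, byte> grey = improveMe.Convert<Gray, byte>();
Image<Gray, byte> binary = grey.ThresholdAdaptive(
    new Gray(255),                                            // value given to "white" pixels
    Emgu.CV.CvEnum.ADAPTIVE_THRESHOLD_TYPE.CV_ADAPTIVE_THRESH_MEAN_C,
    Emgu.CV.CvEnum.THRESH.CV_THRESH_BINARY,
    11,                                                       // neighbourhood (block) size - assumption
    new Gray(5));                                             // constant subtracted from the mean - assumption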
I have a C# winform app.
I am utilising Emgu framework to help me to detect motion between frames.
I have one issue: at night time, or when the image is dull because of an overcast day and the object I want to detect has low-value colours (like black, brown and dark green), it is sometimes difficult to detect this motion.
I had hit upon the idea of enhancing the image when it is a dull image frame.
I would first have to work out the 'average' contrast of an image to determine whether I need to increase the contrast of that image.
What would be the best way to do this?
I have converted the RGB image to an HSV image. But I am unsure which values/channels to use to perceive whether the overall image is low in contrast.
I have looked around for a formula that would measure this based on Hue, Saturation and Luminance/brightness.
So far I have this:
C = ((100.0 + T) / 100.0)^2, taken from this site: enter link description here
'T' is defined as the variable Threshold. Now this is where I come unstuck.
What is this variable threshold?
What should I base it on?
Should I look elsewhere for an answer?
Did you look at histogram equalization? I use it to enhance image contrast automatically (but I have only used it on grayscale images).
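If it helps, here is a minimal sketch of one way to measure "average" contrast, using RMS contrast (the standard deviation of grey levels) to decide whether to equalize; 'frame' and the threshold value are assumptions you would adapt to your own footage:

Image<Gray, byte> grey = frame.Convert<Gray, byte>(); // 'frame' is your captured image
byte[,,] data = grey.Data;
double sum = 0, sumSq = 0;
int n = grey.Rows * grey.Cols;
for (int y = 0; y < grey.Rows; y++)
    for (int x = 0; x < grey.Cols; x++)
    {
        double v = data[y, x, 0];
        sum += v;
        sumSq += v * v;
    }
double mean = sum / n;
double rmsContrast = Math.Sqrt(sumSq / n - mean * mean);
if (rmsContrast < 40) // threshold chosen by eye - tune it for your camera
    grey._EqualizeHist();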
Currently, I am working on a real-time iris-detection application.
I want to perform an invert operation to the frames taken from the web camera, like this:
I managed to get this line of code, but this is not giving the above results. Maybe parameters need to be changed, but I am not sure.
CvInvoke.cvThreshold(grayframeright, grayframeright, 160, 255.0, Emgu.CV.CvEnum.THRESH.CV_THRESH_BINARY_INV);
From the images above, it feels that the second image is the negative of the first image (correct me if I am wrong).
The function you are using is a threshold function, i.e. it will render a pixel as white if it falls within the specified range and otherwise render it as black.
To find the negative of an image you can use one of the following methods.
Taking the NOT of an image.
Image<Bgr, Byte> img2 = img1.Not(); // img1 can be a static image or your current captured frame
For more details you can refer to the documentation here.
If you want to invert an image you can do the following:
Mat white = Mat::ones(grayframeright.rows, grayframeright.cols, grayframeright.type()) * 255;
Mat dst = white - grayframeright; // 255 - pixel inverts an 8-bit image
Also note that the pupil can be detected with an OpenCV detector initialized with the Haar cascade for eyes that ships with OpenCV.
I am working on a project where I need to play with two bitmaps. I am putting them in a grid one over the other with reduced opacity (to give a watermark effect).
I am rendering the grid to a bitmap using RenderTargetBitmap and saving the bitmap to a file.
Now my requirement is to load the rendered bitmap again and recover the original pictures separately. Is there any way to recover the original images? I am not able to think of an algorithm to implement this.
My aim is to give a watermarking effect and then recover the images individually.
No. The information is lost during "flattening" of the image.
You need to save both images and information about their properties (position, opacity) into a single file, and restore them on load.
If your goal is to simulate watermarking and allow later 'dewatermarking', then assuming that you have the watermark bitmap available at decoding time, you probably can do that. You certainly cannot use your initial approach - simple merging of two layers is not reversible.
You need to use some reversible transformation, like rotating the source pixel's RGB vector, using the watermark pixel's RGB values as parameters. When dewatermarking, you just apply the rotation with negated values from the watermark image.
That said, the RGB vector is not ideal - you can go outside the RGB space while rotating it. You can probably find a colour space (or some other transformation in RGB space) better suited to your goal.
(English is not my first or even second language, so I apologize if you can't understand my idea - just ask.)
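To make the idea concrete, here is a hedged sketch of one reversible embedding: per-channel addition modulo 256 instead of rotation (the same reversibility idea; the method name and loop are illustrative). Note the embedded result looks like noise rather than a translucent overlay - this demonstrates losslessness, not the visual watermark effect:

// Embed: pixel = (source + watermark) mod 256. Recover: subtract instead.
static void EmbedOrRecover(Bitmap source, Bitmap watermark, bool recover)
{
    int sign = recover ? -1 : 1;
    for (int y = 0; y < source.Height; y++)
        for (int x = 0; x < source.Width; x++)
        {
            Color s = source.GetPixel(x, y);
            Color w = watermark.GetPixel(x % watermark.Width, y % watermark.Height);
            source.SetPixel(x, y, Color.FromArgb(
                (s.R + sign * w.R + 256) % 256,
                (s.G + sign * w.G + 256) % 256,
                (s.B + sign * w.B + 256) % 256));
        }
}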
Why don't you try making it two layers of bitmaps?
I wonder if you can use the TIFF format, which can store multiple images. That way, on display, you can choose to show the image with or without the watermark.
I have an image that is a depth heatmap, from which I've filtered out anything further away than the nearest 25% of the depth range.
It looks something like this:
There are two blobs of color in the image, one is my hand (with part of my face behind it), and the other is the desk in the lower left corner. How can I search the image to find these blobs? I would like to be able to draw a rectangle around them if possible.
I can also do this (ignore shades, and filter to black or white):
Pick a random pixel as a seed pixel. This becomes area A. Repeatedly expand A until A doesn't get any bigger. That's your area.
The way to expand A is by looking for neighbor pixels to A, such that they have similar color to at least one neighboring pixel in A.
What "similar color" means to you is somewhat variable. If you can make exactly two colors, as you say in another answer, then "similar" is "equal". Otherwise, "similar" would mean colors that have RGB values or whatnot where each component of the two colors is within a small amount of each other (i.e. 255, 128, 128 is similar to 252, 125, 130).
You can also limit the selected pixels so they must be similar to the seed pixel, but that works better when a human is picking the seed. (I believe this is what is done in Photoshop, for example.)
This can be better than edge detection because you can deal with gradients without filtering them out of existence, and you don't need to process the resulting detected edges into a coherent area. It has the disadvantage that a gradient can go all the way from black to white and it'll register as the same area, but that may be what you want. Also, you have to be careful with the implementation or else it will be too slow.
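Here is a minimal sketch of the seed-and-grow idea for the two-colour case (where "similar" means "equal"): each unvisited foreground pixel seeds a breadth-first expansion that yields one blob and its bounding rectangle. All names are illustrative:

static List<Rectangle> FindBlobs(bool[,] img) // true = foreground pixel
{
    int h = img.GetLength(0), w = img.GetLength(1);
    var visited = new bool[h, w];
    var blobs = new List<Rectangle>();
    for (int sy = 0; sy < h; sy++)
        for (int sx = 0; sx < w; sx++)
        {
            if (!img[sy, sx] || visited[sy, sx]) continue;
            int minX = sx, maxX = sx, minY = sy, maxY = sy;
            var queue = new Queue<Point>();
            queue.Enqueue(new Point(sx, sy));
            visited[sy, sx] = true;
            while (queue.Count > 0)
            {
                Point p = queue.Dequeue();
                minX = Math.Min(minX, p.X); maxX = Math.Max(maxX, p.X);
                minY = Math.Min(minY, p.Y); maxY = Math.Max(maxY, p.Y);
                // Expand into the four neighbours that are foreground and unvisited.
                foreach (Point q in new[] { new Point(p.X - 1, p.Y), new Point(p.X + 1, p.Y),
                                            new Point(p.X, p.Y - 1), new Point(p.X, p.Y + 1) })
                    if (q.X >= 0 && q.X < w && q.Y >= 0 && q.Y < h
                        && img[q.Y, q.X] && !visited[q.Y, q.X])
                    {
                        visited[q.Y, q.X] = true;
                        queue.Enqueue(q);
                    }
            }
            blobs.Add(Rectangle.FromLTRB(minX, minY, maxX + 1, maxY + 1));
        }
    return blobs;
}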
It might be overkill for what you need, but there's a great wrapper for C# for the OpenCV libraries.
I have successfully used OpenCV in C++ for blob detection, so you might find it useful for what you're trying to do.
http://www.emgu.com/wiki/index.php/Main_Page
and the wiki page on OpenCV:
http://en.wikipedia.org/wiki/OpenCV
Edited to add: Here is a blob detection library for Emgu in C#. There are even some nice features, like ordering the blobs by descending area (useful for filtering out noise).
http://www.emgu.com/forum/viewtopic.php?f=3&t=205
Edit Again:
If Emgu is too heavyweight, AForge.NET also includes some blob detection methods:
http://www.aforgenet.com/framework/
If the image really contains only a few distinct colours (with very little blur between them), it is an easy case for an edge detection algorithm.
You can use something like the code sample from this question : find a color in an image in c#
It will help you find the x/y of specific colors in your image. Then you could use the min x/max x and the min y/max y to draw your rectangles.
Detect an object in an image based on its color, in C#.
To detect an object based on its color, there is an easy algorithm. You have to choose a filtering method. The steps are normally:
Take the image
Apply your filtering
Apply greyscaling
Subtract background and get your objects
Find position of all objects
Mark the objects
First you have to choose a filtering method; there are many filtering methods available for C#. I mainly prefer the AForge filters; for this purpose they have a few:
ColorFiltering
ChannelFiltering
HSLFiltering
YCbCrFiltering
EuclideanColorFiltering
My favorite is EuclideanColorFiltering. It is easy and simple; a usage sketch follows below. For information about the other filters you can visit the link below. You have to download the AForge DLLs to apply these in your code.
More information about the exact steps can be found here: Link
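A hedged usage sketch of EuclideanColorFiltering (the center color and radius are illustrative values, not from this answer; the input must be a 24bpp RGB bitmap):

// Keep pixels within a radius of the target color in RGB space; blacken the rest.
var filter = new AForge.Imaging.Filters.EuclideanColorFiltering();
filter.CenterColor = new AForge.Imaging.RGB(215, 30, 30); // target color - assumption
filter.Radius = 100;                                      // tolerance sphere radius - assumption
Bitmap filtered = filter.Apply(sourceBitmap);             // 'sourceBitmap' is your image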
How can we identify whether a given image is blurred, and how blurred it is (as a percentage), in C#? Is there an API available for that, or an algorithm that could help?
Thanks!
You could perform a 2D FFT and search the frequency coefficients for a value over a certain threshold (to eliminate false positives from rounding/edge errors). A blurred image will never have high frequency coefficients (large X/Y values in frequency space).
If you want to compare against a specific blurring algorithm, run a single pixel through a 2D FFT and check further images to see if they have frequency components outside the range of that reference FFT. This way you can use the same approach regardless of the type of blurring used (box blur, Gaussian, etc.).
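If a full FFT is more than you need, a simpler and widely used alternative metric (not from this answer) is the variance of the Laplacian; here is a hedged sketch assuming an Emgu 2.x-style API, with an illustrative path and cut-off:

Image<Gray, byte> grey = new Image<Gray, byte>("photo.jpg"); // path is illustrative
Image<Gray, float> lap = grey.Laplace(3); // 3x3 Laplacian highlights edges
float[,,] d = lap.Data;
double sum = 0, sumSq = 0;
int n = lap.Rows * lap.Cols;
for (int y = 0; y < lap.Rows; y++)
    for (int x = 0; x < lap.Cols; x++)
    {
        double v = d[y, x, 0];
        sum += v;
        sumSq += v * v;
    }
double variance = sumSq / n - (sum / n) * (sum / n);
// Few strong edges (low variance) suggests a blurred image.
bool probablyBlurred = variance < 100; // cut-off is an assumption - calibrate it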
For a very specific problem (finding blurred photos of an ancient book), I set up this script, based on ImageMagick:
https://gist.github.com/888239
Given a blurred bitmap alone, you probably can't.
Given the original bitmap and the blurred bitmap you could compare the two pixel by pixel and use a simple difference to tell you how much it is blurred.
Given that I'm guessing you don't have the original image, it might be worth looking at performing some kind of edge detection on the blurred image.
This paper suggests a method using the Haar Wavelet Transform, but as other posters have said, this is a fairly complex subject.
I am posting this answer 8-9 years after the question was asked. At the time, I resolved this problem by applying a blur to the image and then comparing it with the original.
The idea is that when we blur a non-blurry image and then compare the two, the difference between the images is very high; but when we blur an already-blurry image, the difference is only around 10%.
In this way we resolved our problem of identifying blurred images, and the results were quite good.
The results were published in the following conference paper:
Creating digital life stories through activity recognition with image filtering (2010)
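For what it's worth, the re-blur-and-compare idea can be sketched in a few lines with Emgu (API names assumed; the path, kernel size and cut-off are illustrative):

// Blurring a sharp image changes it a lot; blurring an already-blurred
// image changes it very little.
Image<Gray, byte> img = new Image<Gray, byte>("photo.jpg"); // path is illustrative
Image<Gray, byte> reBlurred = img.SmoothGaussian(5);
Gray meanDiff = img.AbsDiff(reBlurred).GetAverage();
// A small meanDiff suggests the input was already blurred; the cut-off
// must be calibrated on your own image set.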
Err... do you have the original image? What you ask is not a simple thing to do, though... if it's even possible.
This is just a bit of a random thought, but you might be able to do it by Fourier-transforming the "blurred" and the original images and seeing if you can get something with a very similar frequency profile by progressively low-pass filtering the original image in the frequency domain. Testing for "similarity" would be fairly complex in itself, though.