Our website allows people to upload images. However, we don't allow watermarked images, yet many still get uploaded by users. Is there some software or code that can (at least in most cases) catch images that have watermarks such as logos or other images? I'm not sure if there is some sort of standard approach.
You can do it via image classification.
Basically, train a CNN (Convolutional Neural Network) model by feeding in some images with a watermark and some without, and then use this model to judge the probability that a new image contains a watermark.
You can apply transfer learning to an existing pre-trained model (as of today, Inception v3 is among the best available), retraining it for your specific classification purpose.
For example, this link shows how to do it to identify whether an image is of a sunflower, a daisy, or a rose.
https://www.tensorflow.org/tutorials/image_retraining
Here is a quick 5-minute tutorial about building a TensorFlow image classifier: https://youtu.be/QfNvhPx5Px8
To detect any kind of logo in an image would be quite complicated. You would need something similar to face recognition, and a lot of AI...
To make it reasonably efficient you would need a library of logos to look for, and you would need to know where they are applied on the images. If the logo is always in the same place, you could just mask out the pixels where it would be and calculate how close they are to the pixels of the logo. If logos vary in size and position, it gets more complicated.
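If the logo always sits in a known rectangle, that comparison can be sketched very simply in C#. The following is a minimal illustration (the RegionDifference name, the use of System.Drawing, and the idea of thresholding the returned score are all assumptions; GetPixel is slow, so a real implementation would use LockBits):

using System;
using System.Drawing;

// Average per-channel difference between the watermark region of an upload
// and a stored logo of the same size; small values suggest the logo is present.
static double RegionDifference(Bitmap upload, Bitmap logo, Rectangle region)
{
    double total = 0;
    for (int y = 0; y < region.Height; y++)
    {
        for (int x = 0; x < region.Width; x++)
        {
            Color a = upload.GetPixel(region.X + x, region.Y + y);
            Color b = logo.GetPixel(x, y);
            total += Math.Abs(a.R - b.R) + Math.Abs(a.G - b.G) + Math.Abs(a.B - b.B);
        }
    }
    return total / (region.Width * region.Height * 3.0);
}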
You can't reliably detect a watermark automatically. The best thing to do is make it really easy for others to report images that have a watermark and, once reported, put them in a holding state where they aren't displayed until it's verified that they either do or don't have a watermark.
With a certain kind of AI it would be possible, at least with a certain probability. More precisely, it IS possible provided that you CAN define what the watermark is, which is the greatest problem. Generic watermark detection is virtually impossible; consider, for example, a logo on a billboard inside a photo.
I was experiencing some performance issues when displaying a large number of images. I discovered the issue was that the full-resolution image was being used when I really only needed an image less than a quarter of the size. So I added a line between BeginInit and EndInit to set the DecodePixelWidth to 200, which is the max width I will need in my layout. Performance was no longer an issue, but some of the images are really small, definitely nowhere close to 200 pixels wide. Most of the images seemed to display correctly, and there doesn't seem to be any rhyme or reason to which are too small and which work correctly. I thought it might be due to differences in the original dimensions of the images, but there was no pattern to the results. I tried bumping the width up to 600, which then allowed the offending images to display at the correct width of 200, but then the performance suffers.
At this point I am not even sure where to start looking, and would be really grateful for a kick in the right direction.
EDIT: Some more information below about the images and how I am using them.
Most of the images are around 1000x1500, although some have odd dimensions like 1000x1513. All of the images are JPEG. Currently each image is placed in a custom user control that I designed, nothing too fancy, just a background around it with some text. Each user control is then placed in a grid, in its own row/column. The grid is inside a ScrollViewer so the user can scroll through the list. This might not be the best way to accomplish what I am looking for, but it's what I came up with quickly and it works for the most part. I'd be happy to switch to another method of display if it would accomplish what I want in an easier or more concise way.
The intended result is for a movie browsing app. There will be a scrollable list of movies, each represented as its own tile complete with title, movie poster, genre info, rating, and description. This list will be sortable on various items. The information about the movies is stored in a SQL database on another machine. The images are originally stored on another machine but are copied locally to improve performance.
EDIT: I have been able to solve the issue by not using DecodePixelWidth and instead saving a copy of the image at the desired size, which has improved performance. Youngjae's recommendation of not using DecodePixelWidth, along with his mention of using a virtualized list, led me to the following set of articles on creating a Virtualized Wrap Panel, which should solve any other performance issues. The articles are for Silverlight, but from what I understand Silverlight is basically a watered-down version of WPF, so if it works in Silverlight it should work in WPF and shouldn't be too difficult to convert for my use. (A sketch of the resize-and-save step follows the article list below.)
Part 1 - MeasureOverride
Part 2 - ArrangeOverride
Part 3 - Animation
Part 4 - Virtualization
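For reference, the resize-and-save step mentioned above could look roughly like this (a sketch using System.Drawing; the SaveThumbnail name, the 200-pixel target width, and the file paths are placeholders):

using System.Drawing;
using System.Drawing.Imaging;

static void SaveThumbnail(string sourcePath, string thumbPath, int targetWidth)
{
    using (var source = Image.FromFile(sourcePath))
    {
        // Keep the aspect ratio while scaling down to the layout width.
        int targetHeight = (int)(source.Height * (targetWidth / (double)source.Width));
        using (var thumb = new Bitmap(source, targetWidth, targetHeight))
        {
            thumb.Save(thumbPath, ImageFormat.Jpeg);
        }
    }
}

// e.g. SaveThumbnail(@"C:\Movies\poster.jpg", @"C:\Movies\poster_200.jpg", 200);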
In the MSDN documentation you can find the following:
The JPEG and Portable Network Graphics (PNG) codecs natively decode the image to the specified size; other codecs decode the image at its original size and scale the image to the desired size.
For the reason above, I recommend NOT using DecodePixelWidth for resizing purposes.
I don't know your original image sizes and formats, but isn't it enough to use a virtualized list together with <Image Width="200" Stretch="Uniform">?
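A code-behind sketch of that suggestion (the MakePosterImage name, the local file path, and the 200-pixel width are assumptions; the surrounding panel should virtualize, e.g. a ListBox, which virtualizes by default):

using System;
using System.Windows.Controls;
using System.Windows.Media;
using System.Windows.Media.Imaging;

Image MakePosterImage(string path)
{
    var bitmap = new BitmapImage();
    bitmap.BeginInit();
    bitmap.CacheOption = BitmapCacheOption.OnLoad;
    bitmap.UriSource = new Uri(path);
    bitmap.EndInit();
    bitmap.Freeze(); // lets the decoded bitmap be shared safely

    // Let WPF scale the full-size bitmap down to the layout width.
    return new Image { Source = bitmap, Width = 200, Stretch = Stretch.Uniform };
}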
I have a bitmap image like this
My requirement is to create a GUI (in C++ or C#) to load the image, change the contrast and other properties of the image, and run an algorithm to mark the particular area shown in silver colour in the figure. I am new to image processing, and through my search I have found that I can use the histogram of the image to find the required area. These are the steps:
Get the histogram
Search for intensity difference
Search for break in the line
Can someone suggest how I can proceed from here? Can I use OpenCV for this, or are other, more efficient methods available?
NOTE:
This image has many bright points, and the blob algorithm is not successful.
Any other suggestions for retrieving the correct coordinates of the rectangle-like object would be appreciated.
Thanks
OpenCV should work.
Convert your input image to greyscale.
adaptiveThreshold converts it to black and white.
Feature detection has a whole list of OpenCV feature detectors; choose one depending on the exact feature that you're trying to detect.
E.g. have a look at the Simple Blob Detector, which lists the basic steps needed. Your silver rectangle certainly qualifies as a "simple blob" (no holes or other hard bits).
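Since the question mentions C#, here is a rough sketch of those steps using Emgu CV (the .NET OpenCV wrapper mentioned elsewhere on this page); the file name, the adaptive-threshold block size, and the blob area filter are assumptions to tune for your images:

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Features2D;
using Emgu.CV.Structure;

// 1. Load as greyscale.
Mat gray = CvInvoke.Imread("part.bmp", ImreadModes.Grayscale);

// 2. Adaptive threshold to get a black/white image.
Mat bw = new Mat();
CvInvoke.AdaptiveThreshold(gray, bw, 255, AdaptiveThresholdType.MeanC,
    ThresholdType.Binary, 25, 5);

// 3. Blob detection, filtered by area so small bright specks are ignored.
var blobParams = new SimpleBlobDetectorParams { FilterByArea = true, MinArea = 500 };
using (var detector = new SimpleBlobDetector(blobParams))
{
    MKeyPoint[] blobs = detector.Detect(bw);
    foreach (MKeyPoint blob in blobs)
    {
        // blob.Point is the blob centre; blob.Size is its approximate diameter.
    }
}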
If all of your pictures look like that, it does not seem complicated to segment the silver area and find its centre. Basically you will need to apply these algorithms in the sequence below:
I would suggest binarising the image using Otsu's adaptive threshold algorithm
Apply a labelling (blob) algorithm
If you have some problems with noise, you can use an opening or median filter before the blob algorithm
If you end up with only one blob (the one with the biggest area, I guess), use the moments algorithm to find its centre of mass. Then you have the X,Y coordinates you are looking for
These algorithms are classical image processing, so I guess it wouldn't be hard to find them. In any case, I may have them implemented in C# and can post them here later if you think they solve your problem. A rough sketch of the sequence is below.
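In the meantime, here is a rough Emgu CV sketch of that sequence (the file name, the median filter size, and the assumption that the part is brighter than the background are placeholders to adjust):

using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

Mat gray = CvInvoke.Imread("part.bmp", ImreadModes.Grayscale);

// Optional: median filter to suppress the bright noise points.
Mat filtered = new Mat();
CvInvoke.MedianBlur(gray, filtered, 5);

// 1. Binarise with Otsu's automatic threshold.
Mat binary = new Mat();
CvInvoke.Threshold(filtered, binary, 0, 255, ThresholdType.Binary | ThresholdType.Otsu);

// 2. Label the blobs via their contours and keep the largest one.
using (var contours = new VectorOfVectorOfPoint())
{
    CvInvoke.FindContours(binary, contours, null, RetrType.External,
        ChainApproxMethod.ChainApproxSimple);

    int biggest = -1;
    double biggestArea = 0;
    for (int i = 0; i < contours.Size; i++)
    {
        double area = CvInvoke.ContourArea(contours[i]);
        if (area > biggestArea) { biggestArea = area; biggest = i; }
    }

    if (biggest >= 0)
    {
        // 3. Centre of mass from the image moments of that blob.
        MCvMoments m = CvInvoke.Moments(contours[biggest]);
        var centre = new PointF((float)(m.M10 / m.M00), (float)(m.M01 / m.M00));
    }
}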
Maybe some research on DirectShow, a multimedia framework from Microsoft, will help you accomplish your task.
I have 4 shapes in an image
I want to get the pixels of one shape as a list of points
The shapes all have the same color
List<Point> GetAllPixelInShape(Point x)
{
    // implementation needed
}
where x is a point inside the shape
Long story short, you could begin with a connected components / region labeling algorithm.
http://en.wikipedia.org/wiki/Connected-component_labeling
In OpenCV you can call findContours() to identify contours, which are the borders of your connected regions.
http://dasl.mem.drexel.edu/~noahKuntz/openCVTut7.html
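Putting those two steps together in C# with Emgu CV (the .NET OpenCV wrapper), a sketch of the proposed method might look like the following. It assumes the image has already been binarised so the shapes are white on black, and the signature is extended to take that image:

using System.Collections.Generic;
using System.Drawing;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;
using Emgu.CV.Util;

static List<Point> GetAllPixelInShape(Image<Gray, byte> binary, Point x)
{
    var pixels = new List<Point>();
    using (var contours = new VectorOfVectorOfPoint())
    {
        // Outer contour of every white shape (FindContours may modify its input,
        // hence the copy).
        CvInvoke.FindContours(binary.Copy(), contours, null,
            RetrType.External, ChainApproxMethod.ChainApproxSimple);

        for (int i = 0; i < contours.Size; i++)
        {
            // Keep only the contour whose region contains the seed point x.
            if (CvInvoke.PointPolygonTest(contours[i], x, false) < 0)
                continue;

            // Fill that contour into a mask and collect every pixel inside it.
            using (var mask = new Image<Gray, byte>(binary.Size))
            {
                CvInvoke.DrawContours(mask, contours, i, new MCvScalar(255), -1);
                for (int row = 0; row < mask.Height; row++)
                    for (int col = 0; col < mask.Width; col++)
                        if (mask.Data[row, col, 0] > 0)
                            pixels.Add(new Point(col, row));
            }
            break;
        }
    }
    return pixels;
}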
OCR is an extremely difficult task, especially for a script like Arabic. Creating an OCR algorithm from scratch takes a lot of work and numerous algorithms working together. OCR for machine printed text is hard enough. Implementing an algorithm to read handwriting is not something I'd suggest trying until you have a year or two of image processing experience. If you haven't read textbooks and academic papers on OCR, you're likely to spend a lot of time reproducing work that has already been done.
If you're not familiar with contour tracing and/or blob analysis, then working with OpenCV may not be a good first step. Since you have a specific goal in mind, you might first try different algorithms in a user-friendly GUI that will save you coding time.
Consider downloading ImageJ so that you can see how the algorithms work. There are plugins for a variety of common image processing algorithms.
http://rsbweb.nih.gov/ij/
Your proposed method signature doesn't provide enough information to solve this. Your method will need to know the bounds of your shape (how long and wide it is, etc.), ideally as a set of points that indicate those bounds.
Once you have those, you could potentially apply the details of this article, in particular the algorithms specified in the answer, to solve your problem.
Hi
I take two pictures from a webcam and split each into 9 pieces. Then I match the pieces of the two pictures. The problem is that my webcam has picture noise, so my program thinks that something has changed in every piece of the second picture.
I need a push in the right direction to solve my problem, please help.
The pictures from the webcam will never exactly match - even the slightest change in lighting will cause a difference. For this kind of picture matching you have to use a forgiving algorithm that allows at least some change and still makes a match. Creating a histogram of each image and then calculating the difference seems to be a promising approach.
See the following threads on SO (just for examples, there are many more threads):
Image comparison - fast algorithm
Image comparison algorithm
Also, I would check out Emgu if you are working with .NET; it is a .NET wrapper for OpenCV, a computer vision library.
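With Emgu, the histogram comparison could be sketched like this (the 64-bin greyscale histogram, the HistogramSimilarity name, and the 0.9 tolerance are assumptions to tune against your camera's noise):

using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Util;

// Compare two greyscale pieces by histogram correlation (1.0 = identical).
static double HistogramSimilarity(Mat pieceA, Mat pieceB)
{
    using (Mat histA = new Mat())
    using (Mat histB = new Mat())
    using (var va = new VectorOfMat(pieceA))
    using (var vb = new VectorOfMat(pieceB))
    {
        int[] channels = { 0 };
        int[] bins = { 64 };
        float[] range = { 0, 256 };
        CvInvoke.CalcHist(va, channels, null, histA, bins, range, false);
        CvInvoke.CalcHist(vb, channels, null, histB, bins, range, false);
        return CvInvoke.CompareHist(histA, histB, HistogramCompMethod.Correl);
    }
}

// Treat a piece as "changed" only when the similarity drops below a tolerance,
// e.g. if (HistogramSimilarity(a, b) < 0.9) { ... }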
I have a fairly simple situation. I just don't know any specific terms to search for.
I have a single image, and in that image I have several other images that follow a basic pattern.
They are rectangles and will possibly have a landmark image to base things off of.
An important part, is that I need to detect rotated/mis-scaled sub-images.
Basically what I need to be able to do is split 'business cards' from a single image into properly aligned single images.
As I am also designing the cards to be scanned, I can put in whatever symbol or marking would make detection easier (as I said, a landmark).
If your example is representative (which I doubt for some reason) then Hough transform is your friend (google it, there are plenty of explanations and code around). With it you'll be able to detect the rectangles.
Some examples of Hough transform in C# are http://www.koders.com/csharp/fid3A88BC1FF95FCA9D6A182698263A40EE7883CF26.aspx and http://www.shedletsky.com/hough/index.html
If what actually happens is that you scan some cards, and you have some control over the process, then I'd suggest that you ensure there is no overlap between cards, and provide a contrasting background (something very different from the cards). Then any edge-detection will get you close enough to what you've drawn in your example, and after that you can use Hough transform.
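A minimal Emgu CV sketch of that edge-detection-plus-Hough route (the file name and thresholds are placeholders, and grouping the detected segments into card rectangles is left out):

using System;
using Emgu.CV;
using Emgu.CV.CvEnum;
using Emgu.CV.Structure;

Mat scan = CvInvoke.Imread("scan.jpg", ImreadModes.Color);
Mat gray = new Mat();
CvInvoke.CvtColor(scan, gray, ColorConversion.Bgr2Gray);

// Edge detection first, then a probabilistic Hough transform to pick up
// the straight card borders.
Mat edges = new Mat();
CvInvoke.Canny(gray, edges, 50, 150);
LineSegment2D[] lines = CvInvoke.HoughLinesP(edges, 1, Math.PI / 180, 80, 50, 10);

foreach (LineSegment2D line in lines)
{
    // Group near-horizontal and near-vertical segments to recover each card's
    // rectangle; here they are just drawn for inspection.
    CvInvoke.Line(scan, line.P1, line.P2, new MCvScalar(0, 0, 255), 2);
}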
Alternatively, you can implement the paper http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.59.4239 which uses Hough transform to detect rectangles directly, without edge detection.
If I did not understand your problem, or you need clarifications, please edit your question further and post a comment on this answer.
Try AForge.NET (if you are using C#). It has DocumentSkewChecker, which will calculate the angle of a rotated image.
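A short sketch of the usual AForge.NET deskewing sequence (DocumentSkewChecker expects an 8bpp greyscale bitmap, so the image is converted first; card is a placeholder Bitmap holding one scanned card):

using System.Drawing;
using AForge.Imaging;
using AForge.Imaging.Filters;

// Convert to 8bpp greyscale, which DocumentSkewChecker expects.
Bitmap gray = Grayscale.CommonAlgorithms.BT709.Apply(card);

// Estimate the skew angle, then rotate back by that amount.
var skewChecker = new DocumentSkewChecker();
double angle = skewChecker.GetSkewAngle(gray);
var rotation = new RotateBilinear(-angle, false);
Bitmap deskewed = rotation.Apply(gray);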
You can try the ExhaustiveTemplateMatching class of AForge.NET.
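A sketch of how it might be used to find a known landmark symbol on the scan (both bitmaps must be in a supported pixel format such as 24bpp RGB, and the 0.9 similarity threshold is an assumption; scan and landmark are placeholder Bitmaps):

using System.Drawing;
using AForge.Imaging;

// scan: the full scanned image; landmark: the symbol printed on every card.
var tm = new ExhaustiveTemplateMatching(0.90f);
TemplateMatch[] matches = tm.ProcessImage(scan, landmark);

foreach (TemplateMatch match in matches)
{
    Rectangle where = match.Rectangle;   // location of one landmark, i.e. one card
    float similarity = match.Similarity; // 0..1, how close the match is
}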