Just out of personal interest, I can see from my research that it's not easy to start writing your own OCR. However, I would like to hear ideas on how to tackle the challenge of not just recognising characters, but also returning the results as a formatted string.
For example, I have an image of a table (imagine it is an image in which "|" and "_" are drawn as straight lines):
|Number, AnotherNumber|Some Text|
|1,4 |Blah |
And after running it through OCR, I would get the result back as "|Number, AnotherNumber|SomeText|\n|1,4|Blah|"
Any ideas on how I could achieve this, and what available tools/libraries I could make use of? I would also like to write this in C# with Visual Studio 2010, ideally working with PDFs, though other image formats are fine. I've already looked at some libraries, but they seem incompatible, as they are written in C or C++.
Thank you.
Alina.
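One hedged sketch of the layout-reconstruction half of the problem: most OCR engines (Tesseract included) can report a bounding box for each recognized word, and the table text can then be rebuilt by clustering those boxes into rows. The `OcrWord` type below is an illustrative stand-in, not any real library's API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// A recognized word plus its bounding box, as most OCR engines report.
// (This type is illustrative -- adapt it to whatever the engine returns.)
class OcrWord
{
    public string Text;
    public int X, Y;   // top-left corner of the word's bounding box
}

static class TableBuilder
{
    // Group words into rows by Y coordinate (within a tolerance),
    // then order each row by X and join the cells with '|'.
    public static List<string> ToRows(IEnumerable<OcrWord> words, int yTolerance)
    {
        var rows = new List<List<OcrWord>>();
        foreach (var w in words.OrderBy(p => p.Y))
        {
            var row = rows.LastOrDefault();
            if (row == null || Math.Abs(w.Y - row[0].Y) > yTolerance)
                rows.Add(row = new List<OcrWord>());
            row.Add(w);
        }
        return rows
            .Select(r => "|" + string.Join("|", r.OrderBy(w => w.X).Select(w => w.Text)) + "|")
            .ToList();
    }
}
```

Real column detection would additionally cluster the X positions (or use the detected positions of the drawn "|" lines), since a single cell such as "Number, AnotherNumber" spans several OCR words; the sketch simply treats each word as a cell.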
Getting OCR libraries is quite hard (that is, if you don't want to pay for one).
You could try this one; it's not free, but it works if you have Office 2007:
http://www.codeproject.com/Articles/41709/How-To-Use-Office-2007-OCR-Using-C
Do you know if there is a library in C#, or a dictionary, that could help me translate Hiragana to Kanji?
I know there is the Windows IME, but I would like to fully customize the design of the candidate list of Kanji for a given Hiragana, and that is not possible with the IME.
Example: the user writes "toru"; first it is converted into Hiragana: "とる"
I would like to have this list of choice:
撮る
取る
盗る
Thanks!
Unfortunately, I do not know of a C# library. All I found involves importing native libraries, as in this SO thread: Japanese to Romaji with Kakasi
If you are willing to do so, perhaps JWPce might help.
Although this is implemented as a Japanese text editor, it also contains a dictionary function (it actually contains a multitude of character lookup systems) that does what you want.
Possibly you can compile the project and then import that lookup functionality? JWPce is licensed under the GPL, and both a binary executable and the source code can be downloaded directly from the homepage.
[Edit]
Researching some more I stumbled over mozc at Google Code:
Mozc is a Japanese Input Method Editor (IME) designed for multi-platform such as Chromium OS, Windows, Mac and Linux. This open-source project originates from Google Japanese Input.
(BSD license)
I have not looked into it myself yet, but it might be more what you are looking for, as it does not have a full application "around it" but is instead intended to be used as a library. Just like you wanted.
They also link to a short video showing how the input looks: http://www.google.co.jp/ime/
Unfortunately, this is still C++, not .NET, but it might be a starting point.
Microsoft publishes this as a separate product, called the Visual Studio International Pack:
http://visualstudiogallery.msdn.microsoft.com/74609641-70BD-4A18-8550-97441850A7A8
I do not know of a C# library either. But given that a dictionary might be sufficient, you may want to look into using the IME dictionary that comes with Anthy.
If you download the sources of the most recent version, you'll find dictionary sources in the mkworddic and alt-cannadic directories. Look at the various files ending in .t.
Note that they are encoded in EUC-JP; you might want to convert them to UTF-8.
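If you go that route, here is a minimal sketch of turning such dictionary lines into a lookup table. Note this is an assumption about the format: roughly, each line starts with a kana reading followed by part-of-speech tags (tokens starting with '#') and candidate words; the real .t files have more structure than this, so check them before relying on it:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text;

static class AnthyDict
{
    // Build a reading -> candidates map from Anthy-style dictionary lines.
    // Tokens beginning with '#' are treated as part-of-speech tags (or
    // comment lines) and skipped; everything else after the reading is
    // taken as a candidate word.
    public static Dictionary<string, List<string>> Build(IEnumerable<string> lines)
    {
        var dict = new Dictionary<string, List<string>>();
        foreach (var line in lines)
        {
            var tokens = line.Split(new[] { ' ', '\t' }, StringSplitOptions.RemoveEmptyEntries);
            if (tokens.Length < 2 || tokens[0].StartsWith("#")) continue;
            var reading = tokens[0];
            var candidates = tokens.Skip(1).Where(t => !t.StartsWith("#")).ToList();
            if (candidates.Count == 0) continue;
            if (!dict.ContainsKey(reading)) dict[reading] = new List<string>();
            dict[reading].AddRange(candidates);
        }
        return dict;
    }

    // The .t files are EUC-JP encoded, so decode them accordingly:
    public static Dictionary<string, List<string>> Load(string path)
    {
        return Build(File.ReadLines(path, Encoding.GetEncoding("euc-jp")));
    }
}
```

With a table like this, looking up "とる" would give you the candidate list (取る, 撮る, 盗る, ...) to render in your own UI. `Encoding.GetEncoding("euc-jp")` works out of the box on .NET Framework; on newer runtimes you may need to register the code-pages encoding provider first.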
I am trying to use Graphics.DrawString and TextRenderer.DrawText to lay out strings with a variable number of characters inside a fixed rectangle.
However, even using the GDI+ wrapper methods, I am not satisfied with the result: I need to control the font kerning (or string character spacing) to have a chance of fitting strings with many characters.
I read about FontStretches, but I do not know how to use it in WinForms. Another method is Typography.SetKerning, but again I am blank about how to use it.
Can someone help?!
Round 2:
I know it could be hard; the Win32 API has font-rendering support which could be the solution to this issue.
Practically, my aim is to do something similar to "http://stackoverflow.com/questions/4582545/kerning-problems-when-drawing-text-character-by-character", in .NET. Note that I am working on pre-formed Arabic strings, not user character input.
My problem is:
(1) identify which library has the kerning function I need (most probably gdi32.dll), (2) build a safe C# environment for the DLL calls, and (3) implement a call to the DLL that works from C#.
Can someone help?
Thank you for answering.
If you look at the documentation, it's quite easy to find out which method does what and how to use it.
The method Typography.SetKerning is a WPF-only thing, so you won't be able to use it in WinForms.
A quick Google search found this article, which shows how to modify kerning values for GDI text.
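As a rough sketch of steps (2) and (3) from the question: `SetTextCharacterExtra` is a real gdi32.dll export that adds (or, with a negative value, removes) pixels between characters drawn through GDI (TextOut/ExtTextOut, and hence TextRenderer); it has no effect on GDI+ Graphics.DrawString. The HDC would come from Graphics.GetHdc() in a Paint handler and the HFONT from Font.ToHfont(); remember to release both afterwards.

```csharp
using System;
using System.Runtime.InteropServices;

static class PackedText
{
    [DllImport("gdi32.dll")] static extern int SetTextCharacterExtra(IntPtr hdc, int extra);
    [DllImport("gdi32.dll")] static extern IntPtr SelectObject(IntPtr hdc, IntPtr obj);
    [DllImport("gdi32.dll")] static extern uint SetTextColor(IntPtr hdc, int colorRef);
    [DllImport("gdi32.dll", CharSet = CharSet.Unicode)]
    static extern bool TextOut(IntPtr hdc, int x, int y, string text, int len);

    // GDI wants colors as 0x00BBGGRR COLORREF values:
    public static int ToColorRef(int r, int g, int b) { return r | (g << 8) | (b << 16); }

    public static void Draw(IntPtr hdc, IntPtr hFont, string text,
                            int x, int y, int extraPixels, int colorRef)
    {
        IntPtr oldFont = SelectObject(hdc, hFont);
        SetTextCharacterExtra(hdc, extraPixels);   // e.g. -1 packs characters tighter
        SetTextColor(hdc, colorRef);
        TextOut(hdc, x, y, text, text.Length);
        SelectObject(hdc, oldFont);                // restore the DC's previous font
    }
}
```

One caveat for the Arabic case: uniform extra spacing can interact badly with contextual shaping of connected script, so test whether the joined forms survive before committing to this approach.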
Well, I'm using a compiled .NET version of this OCR, which can be found at http://www.pixel-technology.com/freeware/tessnet2/
I have it working; however, the aim of this is to read license plates, and sadly the engine doesn't accurately recognise some letters. For example, here's an image I scanned to determine which characters are problematic:
Result:
12345B7B9U
ABCDEFGHIJKLMNUPIJRSTUVHXYZ
Therefore the following characters are being translated incorrectly:
1, O, Q, W
This doesn't seem too bad; however, on my license plates the results aren't so great:
= H4 ODM
= LDH IFW
Fake Test
= NR4 y2k
As you might be able to tell, I've tried noise reduction, increasing the contrast, and removing pixels that aren't absolute black, with no real improvement.
Apparently you can 'teach' the engine new fonts, but I think I would need to re-compile the library for .NET; it also seems this is done on Linux, which I don't have.
http://www.scribd.com/doc/16747664/Tesseract-Trainingfor-Khmer-LanguageFor-Posting
So I'm stuck on what to try next. I've written a quick console application purely for testing purposes, if anyone wants to try it. If anyone has any ideas, graphics-manipulation techniques, or library suggestions, I'd appreciate hearing them.
I used Tesseract via Tessnet2 recently (Tessnet2 is a VS2008 C++ wrapper around Tesseract 2.0 made by Rémy Thomas, if I remember correctly). Let me try to help you with the little knowledge I have of this tool:
First, as I said above, this wrapper is only for Tesseract 2.0, and the newest Tesseract version on Google Code is 3.00 (the code is no longer hosted on SourceForge). There are regular contributors: I saw that a version 3.01 or so is planned. So you don't benefit from the latest enhancements, including page layout analysis, which may help when your license plates are not 100% horizontal.
I asked Rémy for a Tessnet2 .NET wrapper around version 3; he doesn't plan one for now. So, as I did, you'll have to do it yourself!
So if you want the latest version of the sources, you can download them from the Subversion repository (everything is described on the dedicated site page), and you'll be able to compile them if you have Visual Studio 2008, since the sources contain a VS2008 solution in the vs2008 sub-folder. This solution is made of VS2008 C++ projects, so to get results in C# you'll have to use .NET P/Invoke with the tessDll built by the project. Again, if you need this, I have code examples that may interest you, but you may prefer to stay with C++ and start your own new WinForms project, for instance!
Once you have managed to compile it (there should be no major problems, but tell me if you hit any; I may have met them too :-) ), you'll have several binaries in the output that will allow you to do specific training! Again, there is a page specially dedicated to Tesseract 3 training. Thanks to this training, you can:
restrict your set of characters, which will automatically remove punctuation misreads ('/-\' instead of 'A', for instance)
indicate the ambiguities you have detected ('D' instead of 'O', as you saw, 'B' instead of '8', etc.), which will be taken into account when you use your training.
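For the first point, you can also restrict the character set without retraining, by passing Tesseract a config file; `tessedit_char_whitelist` is the relevant control variable (the value below assumes plates only use upper-case letters and digits):

```
tessedit_char_whitelist ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789
```

Save that in a config file and pass its name on the tesseract command line (or load it through your wrapper, if it supports config files).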
I also saw that Tesseract's results are better if you restrict the image to the zone where the letters are located (i.e. no faces, no landscape around them): in my case, I needed to recognize only a specific zone of card photos taken from a webcam, so I used image processing to isolate that zone. It took a long time, of course, but my images came from many different sources, so I had no choice. If you can get images that are already cropped to the minimum, that's great!
I hope this was of some help; do not hesitate to send me your remarks and questions!
Hi, I've done lots of OCR with Tesseract, and I have had some of your problems too. You asked about IMAGE PROCESSING tools, and I'd recommend "unpaper" (there are Windows ports too; see Google). It's a nice deskew, unrotate, remove-borders-and-noise-and-so-on program. Great to run before OCR'ing.
If you have a (somewhat) variable background color in your images, I'd recommend the "textcleaner" ImageMagick script.
I think it does edge detection and whitens out all the non-edgy areas.
And if you have complex text, then "ocropus" could be of use.
The syntax is (on Linux): "ocroscript rec-tess "
My setup is:
1. textcleaner
2. unpaper
3. ocropus
With these three steps I can read almost anything. Even quite blurry and noisy images taken in uneven lighting, with two columns of tightly packed text, come out very readable. OK, maybe your input isn't that much text, but steps 1 and 2 could still be of use to you.
I'm currently building a license plate recognition engine for ispy. I got much better results from Tesseract when I split the license plate into individual characters and built a new image with them laid out vertically, with white space around them, like:
W
4
O
O
M
I think a big problem with Tesseract is that it tries to make words out of the horizontal letters and numbers, and in the case of license plates, where letters and numbers are mixed, it will decide that a number is a letter or vice versa. Feeding it an image with the characters spaced vertically makes it treat them as individual characters instead of as text.
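A sketch of the splitting step in that approach, assuming the plate has already been binarized into a 2D array (true = dark pixel): a simple vertical projection finds the horizontal span of each character, and each span can then be copied into its own bitmap and the bitmaps stacked vertically with white padding before handing them to the OCR engine.

```csharp
using System;
using System.Collections.Generic;

static class PlateSegmenter
{
    // Returns the (startColumn, endColumn) span of each character:
    // a span is a maximal run of columns that contain at least one
    // ink pixel, separated from its neighbours by fully blank columns.
    public static List<Tuple<int, int>> CharacterSpans(bool[,] img)
    {
        int h = img.GetLength(0), w = img.GetLength(1);
        var spans = new List<Tuple<int, int>>();
        int start = -1;
        for (int x = 0; x < w; x++)
        {
            bool anyInk = false;
            for (int y = 0; y < h; y++) if (img[y, x]) { anyInk = true; break; }
            if (anyInk && start < 0) start = x;          // a character span begins
            if (!anyInk && start >= 0)                   // the span just ended
            {
                spans.Add(Tuple.Create(start, x - 1));
                start = -1;
            }
        }
        if (start >= 0) spans.Add(Tuple.Create(start, w - 1));
        return spans;
    }
}
```

Real plates may need a minimum-width filter on the spans (to drop screw heads and dirt) and a fallback for touching characters, but the projection gets you most of the way.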
A great read! http://robotics.usc.edu/publications/downloads/pub/635/
About your skew problem for license plates:
Issue: When OCR input is taken from a hand-held camera or other imaging device whose perspective is not fixed, unlike a scanner's, text lines may get skewed from their original orientation [13]. Based on our experiments, feeding such a rotated image to our OCR engine produces extremely poor results.
Proposed Approach: A skew detection process is needed before calling the recognition engine. If any skew is detected, an auto-rotation procedure is performed to correct the skew before processing the text further. While identifying the algorithm to be used for skew detection, we found that many approaches, such as the one mentioned in [13], are based on the assumption that documents have set margins. However, this assumption does not always hold in our application. In addition, traditional methods based on morphological operations and projection methods are extremely slow and tend to fail in the presence of camera-captured images. In this work, we choose a more robust approach based on the Branch-and-Bound text line finding algorithm (RAST algorithm) [25] for skew detection and auto-rotation. The basic idea of this algorithm is to identify each line independently and use the slope of the best-scoring line as the skew angle for the entire text segment. After detecting the skew angle, rotation is performed accordingly. Based on our experiments, we found this algorithm to be highly robust and extremely efficient and fast. However, it suffered from one minor limitation in the sense that it failed to detect rotation greater than 30°. We also tried an alternate approach, which could detect any angle of skew up to 90°. However, this approach was based on the presence of some sort of cross on the image. Due to the lack of extensibility, we decided to stick with the RAST algorithm.
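The projection methods the excerpt mentions are slower than RAST but easy to sketch. As a toy illustration (not the paper's algorithm): for each candidate angle, rotate the ink pixels and histogram their vertical coordinate; text lines parallel to the axis give a sharply peaked histogram (high variance), while skewed text smears it out, so the best-scoring angle is the estimated skew.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class SkewEstimator
{
    // inkXY: (x, y) coordinates of the dark pixels in the binarized image.
    public static double EstimateDegrees(IEnumerable<Tuple<int, int>> inkXY,
                                         double maxAngle = 30.0, double step = 0.5)
    {
        var pts = inkXY.ToList();
        double bestAngle = 0.0, bestScore = double.MinValue;
        for (double a = -maxAngle; a <= maxAngle; a += step)
        {
            double rad = a * Math.PI / 180.0;
            var hist = new Dictionary<int, int>();
            foreach (var p in pts)
            {
                // Vertical coordinate of the pixel after rotating by -a:
                int row = (int)Math.Round(p.Item2 * Math.Cos(rad) - p.Item1 * Math.Sin(rad));
                int c;
                hist.TryGetValue(row, out c);
                hist[row] = c + 1;
            }
            // Score the profile by its variance around the mean bin count:
            double mean = hist.Values.Average();
            double score = hist.Values.Sum(v => (v - mean) * (v - mean));
            if (score > bestScore) { bestScore = score; bestAngle = a; }
        }
        return bestAngle; // rotate the image by -bestAngle to deskew
    }
}
```

As the excerpt warns, this brute-force search is slow on full pages; for a small cropped plate region it is usually fast enough.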
Tesseract 3.0x, by default, penalizes character combinations that aren't words, and words that aren't common. The FAQ describes a method for increasing its aversion to such nonsense; you might find it helpful to do the opposite and turn off the penalty for rare or nonexistent words, as described (inversely) here:
http://code.google.com/p/tesseract-ocr/wiki/FAQ#How_to_increase_the_trust_in/strength_of_the_dictionary?
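Concretely, the knobs behind that FAQ entry are Tesseract 3.x's language-model penalty variables; setting them to zero in a config file (or via SetVariable, if your wrapper exposes it) disables the dictionary bias that turns plate text into "words". Variable names are as of Tesseract 3.0x; check your build's defaults:

```
language_model_penalty_non_freq_dict_word 0.0
language_model_penalty_non_dict_word 0.0
```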
If anyone from the future comes across this question: there is a tool called jTessBoxEditor that makes teaching Tesseract a breeze. All you do is point it at a folder containing sample images, then click a button, and it creates your *.traineddata file for you.
ABCocr .NET uses Tesseract3 so that might be appropriate if you need the latest code under .NET.
I am starting to write an application, one part of which must decode bar codes. However, I'm off to a bad start: I am no bar code expert, this is not a common bar code type, and I cannot figure out what type of bar code it is that I have to decode.
I have looked on Wikipedia and some other sites with visual descriptions of different types of bar codes (and how to identify them), however I cannot identify it. Please note that I have tried several free bar code decoding programs and they have all failed to decode this.
So here is a picture of that bar code:
http://www.shrani.si/f/2B/4p/4UCVyP72/barcode.jpg
I hope one of you can recognize it. Also, if anyone has worked with this type before and knows of a library that can decode it from an image, I'd love to hear about it.
I'm very thankful for any additional pointers I can receive. Thank you.
zbar thinks it's Code 128, but the decoded string is suspiciously different from the barcode's own caption. Maybe it's a character set difference?
~/src/zebra-0.5/zebraimg$ ./zebraimg ~/src/barcode/reader/barcode.jpg
CODE-128:10657958011502540742
scanned 1 barcode symbols from 1 images in 0.04 seconds
My old copy was called zebra but the library is now called zbar. http://sourceforge.net/projects/zbar/
I don't recognize this bar code, but here are a few sites that might help you (libraries etc.), assuming you use C# and .NET (you didn't specify in your question):
http://www.idautomation.com/csharp/
http://www.bokai.com/barcode.net.htm
It looks a bit like Code 128 but http://www.onlinebarcodereader.com/ does not recognize it as such. Maybe the image quality isn't good enough.
If you are using Java:
http://code.google.com/p/zxing/
Open Source, supports multiple types of barcodes
A list of software can be found here:
http://www.dmoz.org/Computers/Software/Bar_Code/Decoding/
IANABCE (I Am Not A Barcode Expert), but looking at the barcodes here, I'd say this looks closest to the UCC/EAN-128 symbology, character set 'C'.
Do you know what the barcode is used for? What's the application domain?
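If it is Code 128 (or the UCC/EAN-128 application variant, which uses the same symbols), one sanity check on a candidate decode is the symbology's check character: the start code's value, plus each data symbol's value multiplied by its 1-based position, taken modulo 103, must equal the check symbol printed before the stop pattern. A sketch:

```csharp
using System;

static class Code128
{
    // Compute the Code 128 check character from the raw symbol values
    // (startValue is 103/104/105 for start codes A/B/C; in set C each
    // data value 0..99 encodes a pair of digits).
    public static int Checksum(int startValue, int[] dataValues)
    {
        int sum = startValue;                    // start code has weight 1
        for (int i = 0; i < dataValues.Length; i++)
            sum += dataValues[i] * (i + 1);      // weights 1, 2, 3, ...
        return sum % 103;
    }
}
```

If the check character your reader reports doesn't match this sum, the decode (or the symbology guess) is wrong, which could explain the mismatch against the barcode's printed caption.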