Handwriting recognition
Handwriting recognition (or HWR[1]) is the ability of a computer to receive and interpret intelligible handwritten input from sources such as paper documents, photographs, touch-screens and other devices. The image of the written text may be sensed "off line" from a piece of paper by optical scanning (optical character recognition) or intelligent word recognition. Alternatively, the movements of the pen tip may be sensed "on line", for example by a pen-based computer screen surface, a generally easier task as there are more clues available.
Handwriting recognition principally entails optical character recognition. However, a complete handwriting recognition system also handles formatting, performs correct segmentation into characters and finds the most plausible words.
Off-line recognition
Off-line handwriting recognition involves the automatic conversion of text in an image into letter codes that are usable within computer and text-processing applications. The data obtained in this form is regarded as a static representation of handwriting. Off-line handwriting recognition is comparatively difficult, as different people have different handwriting styles. As of today, OCR engines are primarily focused on machine-printed text, while ICR handles hand-"printed" text (written in capital letters).
Problem domain reduction techniques
Narrowing the problem domain often helps increase the accuracy of handwriting recognition systems. A form field for a U.S. ZIP code, for example, would contain only the characters 0-9, which reduces the number of possible identifications.
Primary techniques:
- Specifying allowed character ranges
- Utilization of specialized forms
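As a rough illustration of the first technique, a recognizer's candidate scores can be filtered against the character range a field permits. The sketch below is a minimal, hypothetical example: the classifier producing the scores and the score values themselves are assumptions, not taken from any particular system.

```python
# Minimal sketch of problem-domain reduction: restrict a recognizer's output
# to an allowed character set (here, digits for a U.S. ZIP code field).
# The scores dictionary is assumed to come from some upstream classifier.

ALLOWED_ZIP_CHARS = set("0123456789")

def best_allowed_character(scores):
    """Return the highest-scoring character that the form field permits."""
    candidates = {c: s for c, s in scores.items() if c in ALLOWED_ZIP_CHARS}
    if not candidates:
        raise ValueError("no candidate falls inside the allowed character range")
    return max(candidates, key=candidates.get)

# Example: an ambiguous glyph scored as either the letter 'O' or the digit '0'
print(best_allowed_character({"O": 0.48, "0": 0.45, "D": 0.07}))  # -> "0"
```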
Character extraction
Off-line character recognition often involves scanning a form or document written sometime in the past. This means the individual characters contained in the scanned image will need to be extracted. Tools exist that are capable of performing this step.[2] However, there are several common imperfections in this step. The most common is when characters that are connected are returned as a single sub-image containing both characters. This causes a major problem in the recognition stage. Yet many algorithms are available that reduce the risk of connected characters.
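One widely used way to pull individual characters out of a binarized page is connected-component labelling. The sketch below, which assumes SciPy and a grayscale page image, illustrates the idea; it also exhibits the failure mode described above, since two touching characters are labelled as a single component and come back as one sub-image.

```python
# A sketch of one common character-extraction approach: connected-component
# labelling of a binarized page image. Real systems add slant correction,
# line/word segmentation and splitting of touching characters, which this
# illustration deliberately omits.
import numpy as np
from scipy import ndimage

def extract_character_images(gray_page: np.ndarray, threshold: int = 128):
    """Yield sub-images, one per connected blob of dark ink, left to right."""
    binary = gray_page < threshold            # ink pixels become True
    labels, count = ndimage.label(binary)     # group touching ink pixels
    boxes = ndimage.find_objects(labels)      # bounding-box slices per blob
    # Sort by the left edge so characters come out in reading order.
    for box in sorted(boxes, key=lambda b: b[1].start):
        yield gray_page[box]
```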
Character recognition
After the extraction of individual characters occurs, a recognition engine is used to identify the corresponding computer character. Several different recognition techniques are currently available.
Neural networks
Neural network recognizers learn from an initial image training set. The trained network then makes the character identifications. Each neural network uniquely learns the properties that differentiate training images. It then looks for similar properties in the target image to be identified. Neural networks are quick to set up; however, they can be inaccurate if they learn properties that are not important in the target data.
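As a minimal illustration of this kind of recognizer, the sketch below trains a small multilayer perceptron on scikit-learn's bundled 8×8 digits dataset, which stands in for a real handwriting training set; the network size and training parameters are illustrative choices only.

```python
# A minimal sketch of a neural-network character recognizer. The network
# learns its own discriminative properties from the training images and then
# applies them to held-out images.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                                   # 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data / 16.0, digits.target, test_size=0.2, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
net.fit(X_train, y_train)
print("held-out accuracy:", net.score(X_test, y_test))
```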
Feature extraction
Feature extraction works in a similar fashion to neural network recognizers. However, programmers must manually determine the properties they feel are important.
Some example properties might be:
- Aspect ratio
- Percentage of pixels above the horizontal half point
- Percentage of pixels to the right of the vertical half point
- Number of strokes
- Average distance from the image center
- Reflection about the y-axis
- Reflection about the x-axis
This approach gives the recognizer more control over the properties used in identification. Yet any system using this approach requires substantially more development time than a neural network because the properties are not learned automatically.
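A minimal sketch of such hand-crafted feature extraction is shown below for a single binarized character image (ink pixels set to true). It computes several of the properties listed above; stroke counting, which requires contour tracing, is omitted, and the symmetry scores used in place of the reflection properties are illustrative assumptions rather than a standard definition.

```python
# Hand-crafted feature extraction for one binarized character image
# (a 2-D boolean array where True marks ink pixels).
import numpy as np

def character_features(img: np.ndarray) -> dict:
    h, w = img.shape
    ys, xs = np.nonzero(img)                       # coordinates of ink pixels
    total = len(xs)
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    return {
        "aspect_ratio": w / h,
        "frac_above_half": float(np.mean(ys < h / 2)) if total else 0.0,
        "frac_right_of_half": float(np.mean(xs >= w / 2)) if total else 0.0,
        "mean_dist_from_center": float(np.mean(np.hypot(xs - cx, ys - cy))) if total else 0.0,
        # Fraction of pixels unchanged under reflection, standing in for the
        # "reflection about the axis" properties in the list above.
        "y_axis_symmetry": float(np.mean(img == img[:, ::-1])),
        "x_axis_symmetry": float(np.mean(img == img[::-1, :])),
    }
```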
On-line recognition
On-line handwriting recognition involves the automatic conversion of text as it is written on a special digitizer or PDA, where a sensor picks up the pen-tip movements as well as pen-up/pen-down switching. This kind of data is known as digital ink and can be regarded as a digital representation of handwriting. The obtained signal is converted into letter codes which are usable within computer and text-processing applications.
The elements of an on-line handwriting recognition interface typically include:
- a pen or stylus for the user to write with.
- a touch sensitive surface, which may be integrated with, or adjacent to, an output display.
- a software application which interprets the movements of the stylus across the writing surface, translating the resulting strokes into digital text.
General process
The process of online handwriting recognition can be broken down into a few general steps:
- preprocessing,
- feature extraction and
- classification
The purpose of preprocessing is to discard irrelevant information in the input data that could negatively affect recognition;[3] this concerns both speed and accuracy. Preprocessing usually consists of binarization, normalization, sampling, smoothing and denoising.[4] The second step is feature extraction: out of the two- or more-dimensional vector field received from the preprocessing algorithms, higher-dimensional data is extracted. The purpose of this step is to highlight the information that is important for the recognition model. This data may include information such as pen pressure, velocity or changes of writing direction. The last major step is classification, in which various models are used to map the extracted features to different classes and thus identify the characters or words the features represent.
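The sketch below illustrates the first two steps on a single pen stroke recorded as (x, y) points in temporal order: size normalization and arc-length resampling as preprocessing, followed by extraction of writing-direction angles as one simple feature. The resampling count and the choice of features are illustrative assumptions, not prescribed values.

```python
# Preprocessing and simple feature extraction for one on-line pen stroke,
# given as an (N, 2) array of (x, y) points in temporal order.
import numpy as np

def preprocess(stroke: np.ndarray, n_points: int = 32) -> np.ndarray:
    """Normalize the stroke to a unit box and resample it to n_points."""
    stroke = stroke - stroke.min(axis=0)                 # translate to origin
    scale = float(stroke.max()) or 1.0                   # guard against a dot
    stroke = stroke / scale                              # size normalization
    # Resample uniformly along arc length so writing speed is factored out.
    seg = np.linalg.norm(np.diff(stroke, axis=0), axis=1)
    arclen = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, arclen[-1], n_points)
    return np.column_stack(
        [np.interp(targets, arclen, stroke[:, i]) for i in (0, 1)])

def direction_features(stroke: np.ndarray) -> np.ndarray:
    """Writing-direction angles between consecutive resampled points."""
    d = np.diff(stroke, axis=0)
    return np.arctan2(d[:, 1], d[:, 0])
```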
Hardware
Commercial products incorporating handwriting recognition as a replacement for keyboard input were introduced in the early 1980s. Examples include handwriting terminals such as the Pencept Penpad[5] and the Inforite point-of-sale terminal.[6] With the advent of the large consumer market for personal computers, several commercial products were introduced to replace the keyboard and mouse on a personal computer with a single pointing/handwriting system, such as those from PenCept,[7] CIC[8] and others. The first commercially available tablet-type portable computer was the GRiDPad from GRiD Systems, released in September 1989. Its operating system was based on MS-DOS.
In the early 1990s, hardware makers including NCR, IBM and EO released tablet computers running the PenPoint operating system developed by GO Corp. PenPoint used handwriting recognition and gestures throughout and provided those facilities to third-party software. IBM's tablet computer was the first to use the ThinkPad name and used IBM's handwriting recognition. This recognition system was later ported to Microsoft Windows for Pen Computing and IBM's Pen for OS/2. None of these were commercially successful.
Advancements in electronics allowed the computing power necessary for handwriting recognition to fit into a smaller form factor than tablet computers, and handwriting recognition is often used as an input method for hand-held PDAs. The first PDA to provide written input was the Apple Newton, which exposed the public to the advantage of a streamlined user interface. However, the device was not a commercial success, owing to the unreliability of the software, which tried to learn a user's writing patterns. By the time Newton OS 2.0 was released, the handwriting recognition had been greatly improved, including unique features still not found in current recognition systems, such as modeless error correction; however, the largely negative first impression had already been made. After the discontinuation of the Apple Newton, the handwriting recognition technology was ported to Mac OS X 10.2 and later in the form of Inkwell.
Palm later launched a successful series of PDAs based on the Graffiti recognition system. Graffiti improved usability by defining a set of "unistrokes", or one-stroke forms, for each character. This narrowed the possibility for erroneous input, although memorization of the stroke patterns did increase the learning curve for the user. The Graffiti handwriting recognition was found to infringe on a patent held by Xerox, and Palm replaced Graffiti with a licensed version of the CIC handwriting recognition which, while also supporting unistroke forms, pre-dated the Xerox patent. The court finding of infringement was reversed on appeal, and then reversed again on a later appeal. The parties involved subsequently negotiated a settlement concerning this and other patents.
A Tablet PC is a special notebook computer that is outfitted with a digitizer tablet and a stylus, and allows a user to handwrite text on the unit's screen. The operating system recognizes the handwriting and converts it into typewritten text. Windows Vista and Windows 7 include personalization features that learn a user's writing patterns or vocabulary for English, Japanese, Chinese Traditional, Chinese Simplified and Korean. The features include a "personalization wizard" that prompts for samples of a user's handwriting and uses them to retrain the system for higher accuracy recognition. This system is distinct from the less advanced handwriting recognition system employed in its Windows Mobile OS for PDAs.
Although handwriting recognition is an input form that the public has become accustomed to, it has not achieved widespread use in either desktop computers or laptops. It is still generally accepted that keyboard input is both faster and more reliable. As of 2006, many PDAs offer handwriting input, sometimes even accepting natural cursive handwriting, but accuracy is still a problem, and some people still find even a simple on-screen keyboard more efficient.
Software
Initial software modules could understand print handwriting where the characters were separated. The first applied pattern recognition program was written in 1962 by Shelia Guberman, then in Moscow.[9] Commercial examples came from companies such as Communications Intelligence Corporation and IBM.
In the early 1990s, two companies, ParaGraph International and Lexicus, came up with systems that could recognize cursive handwriting. ParaGraph was based in Russia and founded by computer scientist Stepan Pachikov, while Lexicus was founded by Ronjon Nag and Chris Kortge, who were students at Stanford University. The ParaGraph CalliGrapher system was deployed in the Apple Newton systems, and the Lexicus Longhand system was made available commercially for the PenPoint and Windows operating systems. Lexicus was acquired by Motorola in 1993 and went on to develop Chinese handwriting recognition and predictive text systems for Motorola. ParaGraph was acquired in 1997 by SGI and its handwriting recognition team formed a P&I division, later acquired from SGI by Vadem. Microsoft acquired the CalliGrapher handwriting recognition and other digital ink technologies developed by P&I from Vadem in 1999.
Wolfram Mathematica (8.0 or later) also provides a handwriting and text recognition function, TextRecognize.
Research
Handwriting recognition has an active community of academics studying it. The biggest conferences for handwriting recognition are the International Conference on Frontiers in Handwriting Recognition (ICFHR), held in even-numbered years, and the International Conference on Document Analysis and Recognition (ICDAR), held in odd-numbered years. Both of these conferences are endorsed by the IEEE. Active areas of research include:
- Online Recognition
- Offline Recognition
- Signature Verification
- Postal-Address Interpretation
- Bank-Check Processing
- Writer Recognition
A comprehensive survey of research on handwriting recognition up to 2000 was published by R. Plamondon and S. N. Srihari.[11] In India, the Technology Development for Indian Languages programme (TDIL[12]), under the Department of Information Technology,[13] Government of India, funded a national-level research consortium in 2007 on online handwriting recognition in several Indian languages, led by Prof. A. G. Ramakrishnan of the Medical Intelligence and Language Engineering lab, Department of Electrical Engineering,[14] Indian Institute of Science, Bangalore. An effective system has been developed for the recognition of online handwriting in Tamil.[15][16][17]
Results since 2009
Since 2009, the recurrent neural networks and deep feedforward neural networks developed in the research group of Jürgen Schmidhuber at the Swiss AI Lab IDSIA have won several international handwriting competitions.[18] In particular, the bi-directional and multi-dimensional Long short-term memory (LSTM)[19][20] of Alex Graves et al. won three competitions in connected handwriting recognition at the 2009 International Conference on Document Analysis and Recognition (ICDAR), without any prior knowledge about the three different languages (French, Arabic, Persian) to be learned. Recent GPU-based deep learning methods for feedforward networks by Dan Ciresan and colleagues at IDSIA won the ICDAR 2011 offline Chinese handwriting recognition contest; their neural networks also were the first artificial pattern recognizers to achieve human-competitive performance[21] on the famous MNIST handwritten digits problem[22] of Yann LeCun and colleagues at NYU.
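The sketch below is not the competition-winning system described above; it is only a minimal PyTorch illustration of the general shape of a bidirectional LSTM that maps a sequence of per-frame handwriting features to per-frame character scores, with the feature, hidden and alphabet sizes chosen arbitrarily. In practice such a network is trained with a connectionist temporal classification (CTC) loss, which is what lets it handle connected handwriting without pre-segmented character boundaries.

```python
# Minimal bidirectional LSTM sequence model for handwriting feature frames.
# All sizes below are illustrative assumptions.
import torch
import torch.nn as nn

class BiLSTMRecognizer(nn.Module):
    def __init__(self, n_features: int = 20, n_hidden: int = 128, n_classes: int = 80):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * n_hidden, n_classes)   # both directions concatenated

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) -> (batch, time, n_classes)
        h, _ = self.lstm(x)
        return self.out(h)

# Example: scores for a batch of 4 sequences, 150 time steps each.
model = BiLSTMRecognizer()
print(model(torch.randn(4, 150, 20)).shape)   # torch.Size([4, 150, 80])
```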
See also
- Optical character recognition
- Intelligent character recognition
- AI effect
- Applications of artificial intelligence
- Handwriting movement analysis
- Neocognitron
- Pen computing
- Live Ink Character Recognition Solution
- Sketch recognition
- Tablet PC
References
- ↑ http://acronyms.thefreedictionary.com/HWR
- ↑ Java OCR, 5 June 2010. Retrieved 5 June 2010
- ↑ Huang, B.; Zhang, Y. and Kechadi, M.; Preprocessing Techniques for Online Handwriting Recognition. Intelligent Text Categorization and Clustering, Springer Berlin Heidelberg, 2009, Vol. 164, "Studies in Computational Intelligence" pp. 25–45.
- ↑ Holzinger, A.; Stocker, C.; Peischl, B. and Simonic, K.-M.; On Using Entropy for Enhancing Handwriting Preprocessing, Entropy 2012, 14, pp. 2324-2350.
- ↑ Pencept Penpad (TM) 200 Product Literature, Pencept, Inc., 1982-08-15
- ↑ Inforite Hand Character Recognition Terminal, Cadre Systems Limited, England, 1982-08-15
- ↑ Users Manual for Penpad 320, Pencept, Inc., 1984-06-15
- ↑ Handwriter (R) GrafText (TM) System Model GT-5000, Communication Intelligence Corporation, 1985-01-15
- ↑ Guberman is the inventor of the handwriting recognition technology used today by Microsoft in Windows CE. Source: In-Q-Tel communication, June 3, 2003
- ↑ S. N. Srihari and E. J. Keubert, "Integration of handwritten address interpretation technology into the United States Postal Service Remote Computer Reader System" Proc. Int. Conf. Document Analysis and Recognition (ICDAR) 1997, IEEE-CS Press, pp. 892–896
- ↑ R. Plamondon and S. N. Srihari (2000). "On-line and off-line handwriting recognition: a comprehensive survey". In: IEEE Transactions on Pattern Analysis and Machine Intelligence 22(1), 63–84.
- ↑ "TDIL". Department of Electronics & Information Technology(DeitY), MCIT, Govt of India. Retrieved 7 December 2014.
- ↑ "DeitY". Department Of Electronics & Information Technology, Government Of India. Retrieved 7 December 2014.
- ↑ A. G. Ramakrishnan. "Department of Electrical Engineering". http://www.ee.iisc.ernet.in/. Indian Institute of Science, Bangalore, India. Retrieved 7 December 2014.
- ↑ Suresh Sundaram and A. G. Ramakrishnan, Attention-feedback based robust segmentation of online handwritten isolated Tamil words. ACM Transactions on Asian Language Information Processing (TALIP), Vol. 12 (1), March 2013, Article No. 4.
- ↑ Suresh Sundaram and A. G. Ramakrishnan, Performance enhancement of online handwritten Tamil symbol recognition with reevaluation techniques. Pattern Analysis and Application, DOI - 10.1007/s10044-013-0353-7, Volume 17 Issue 3, Pages 587-609, August 2014
- ↑ Suresh Sundaram and A. G. Ramakrishnan, Bigram language models and reevaluation strategy for improved recognition of online handwritten Tamil words. ACM Transactions on Asian and Low-Resource Language Information Processing (TALLIP), Vol. 14 (2), June 2015
- ↑ 2012 Kurzweil AI Interview with Jürgen Schmidhuber on the eight competitions won by his Deep Learning team 2009-2012
- ↑ Graves, Alex; and Schmidhuber, Jürgen; Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks, in Bengio, Yoshua; Schuurmans, Dale; Lafferty, John; Williams, Chris K. I.; and Culotta, Aron (eds.), Advances in Neural Information Processing Systems 22 (NIPS'22), December 7th–10th, 2009, Vancouver, BC, Neural Information Processing Systems (NIPS) Foundation, 2009, pp. 545–552
- ↑ A. Graves, M. Liwicki, S. Fernandez, R. Bertolami, H. Bunke, J. Schmidhuber. A Novel Connectionist System for Improved Unconstrained Handwriting Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 5, 2009.
- ↑ D. C. Ciresan, U. Meier, J. Schmidhuber. Multi-column Deep Neural Networks for Image Classification. IEEE Conf. on Computer Vision and Pattern Recognition CVPR 2012.
- ↑ LeCun, Y., Bottou, L., Bengio, Y., & Haffner, P. (1998). Gradient-based learning applied to document recognition. Proc. IEEE, 86, pp. 2278-2324.
External links
- Annotated bibliography of references to gesture and pen computing
- Notes on the History of Pen-based Computing (YouTube)