Natural language processing
{{otheruses|NLP}}
{{Articleissues|refimprove=July 2008|expand=July 2008|restructure=July 2008}}
{{Merge|Computational linguistics|date=March 2008}}
{{Mergefrom|natural language understanding|date=July 2008}}
'''Natural language processing''' ('''NLP''') is a subfield of [[artificial intelligence]] and [[computational linguistics]]. It studies the problems of automated generation and understanding of [[natural language|natural human languages]].
Natural-language-generation systems convert information from computer databases into normal-sounding human language. Natural-language-understanding systems convert samples of human language into more formal representations that are easier for [[computer]] programs to manipulate.
==Tasks and limitations==
In theory, natural-language processing is a very attractive method of [[human-computer interaction]]. Early systems such as [[SHRDLU]], working in restricted "[[blocks world]]s" with restricted vocabularies, worked extremely well, leading researchers to excessive optimism, which was soon lost when the systems were extended to more realistic situations with real-world [[ambiguity]] and [[complexity]].
Natural-language understanding is sometimes referred to as an [[AI-complete]] problem, because it seems to require extensive knowledge about the outside world and the ability to manipulate it. The definition of "[[understanding]]" is one of the major problems in natural-language processing.
==Concrete problems==
{{see also|Garden path sentence}}
Some examples of the problems faced by natural-language-understanding systems:
* The sentences ''We gave the monkeys the bananas because they were hungry'' and ''We gave the monkeys the bananas because they were over-ripe'' have the same surface grammatical structure. However, the pronoun ''they'' refers to ''monkeys'' in one sentence and ''bananas'' in the other, and it is impossible to tell which without knowledge of the properties of monkeys and bananas.
* A string of words may be interpreted in different ways. For example, the string ''Time flies like an arrow'' may be interpreted in a variety of ways:
**The common [[simile]]: ''[[time]]'' moves quickly just like an arrow does;
**an imperative to measure the speed of flies as one would measure the speed of an arrow, i.e. ''(You should) time flies as you would (time) an arrow'';
**an imperative to measure the speed of flies in the way that an arrow would, i.e. ''Time flies in the same way that an arrow would (time them)'';
**an imperative to measure the speed of those flies that are like arrows, i.e. ''Time those flies that are like arrows'';
**a statement that all members of a type of flying insect, "time flies", collectively enjoy a single arrow (compare ''Fruit flies like a banana'');
**a statement that each member of that type of flying insect individually enjoys a different arrow (a similar comparison applies);
**a statement that a concrete object, for example the magazine ''[[Time (magazine)|Time]]'', travels through the air in an arrow-like manner.
English is particularly challenging in this regard because it has little [[inflectional morphology]] to distinguish between [[parts of speech]]; a small toy-grammar sketch at the end of this list shows how a parser assigns this sentence more than one parse tree.
* In English and several other languages, it is often not specified which word an adjective modifies. For example, in the phrase ''pretty little girls' school'':
** Does the school look little?
** Do the girls look little?
** Do the girls look pretty?
** Does the school look pretty?
* In spoken language, additional information is often conveyed by the way stress is placed on words. The sentence "I never said she stole my money" demonstrates the importance stress can play in a sentence, and thus the inherent difficulty a natural-language processor can have in interpreting it. Depending on which word the speaker stresses, the sentence can have several distinct meanings:
** "'''I''' never said she stole my money" - Someone else said it, but ''I'' didn't.
** "I '''never''' said she stole my money" - I simply didn't ever say it.
** "I never '''said''' she stole my money" - I might have implied it in some way, but I never explicitly said it.
** "I never said '''she''' stole my money" - I said someone took it; I didn't say it was she.
** "I never said she '''stole''' my money" - I just said she probably borrowed it.
** "I never said she stole '''my''' money" - I said she stole someone else's money.
** "I never said she stole my '''money'''" - I said she stole something, but not my money.
==Subproblems==
; [[Speech segmentation]]: In most spoken languages, the sounds representing successive letters blend into each other, so converting the analog signal to discrete characters can be a very difficult process. Also, in [[natural speech]] there are hardly any pauses between successive words; locating those boundaries usually requires taking [[grammatical]] and [[semantic]] constraints, as well as the [[context]], into account.
; [[Text segmentation]]: Some written languages, such as [[Chinese language|Chinese]], [[Japanese language|Japanese]] and [[Thai language|Thai]], do not mark word boundaries, so any significant text [[parsing]] usually requires identifying word boundaries first, which is often a non-trivial task (a simple dictionary-based sketch follows this list).
; [[Word sense disambiguation]]: Many words have more than one [[meaning]]; the system must select the meaning which makes the most sense in context.
; [[Syntactic ambiguity]]: The [[grammar]] for [[natural language]]s is [[ambiguous]], i.e. there are often multiple possible [[parse tree]]s for a given sentence. Choosing the most appropriate one usually requires [[semantics|semantic]] and contextual information. A related problem is [[sentence boundary disambiguation]].
; Imperfect or irregular input: Foreign or regional accents and vocal impediments in speech; typing errors, grammatical errors and [[Optical character recognition|OCR]] errors in text.
; [[Speech acts]] and plans: A sentence can often be considered an action by the speaker. The sentence structure alone may not contain enough information to define this action. For instance, a question is actually a request by the speaker for some sort of response from the listener; the desired response may be verbal, physical, or some combination of the two. "Can you pass the class?" is a request for a simple yes-or-no answer, while "Can you pass the salt?" is a request for a physical action to be performed. It is not appropriate to respond with "Yes, I can pass the salt" without the accompanying action (although "No" or "I can't reach the salt" would explain the lack of action).
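As an illustration of the text-segmentation subproblem above, the following is a minimal sketch of a greedy longest-match ("maximum matching") heuristic over a hypothetical miniature lexicon; real segmenters rely on large dictionaries and statistical models.

<syntaxhighlight lang="python">
# Hypothetical miniature lexicon; real systems use large dictionaries
# and statistical models rather than this greedy heuristic.
lexicon = {"自然", "语言", "处理", "自然语言"}

def max_match(text, lexicon):
    """Greedy left-to-right longest-match ("maximum matching") word segmentation."""
    words = []
    i = 0
    while i < len(text):
        for j in range(len(text), i, -1):            # try the longest candidate first
            if text[i:j] in lexicon or j == i + 1:    # fall back to a single character
                words.append(text[i:j])
                i = j
                break
    return words

print(max_match("自然语言处理", lexicon))   # -> ['自然语言', '处理'] ("natural language" + "processing")
</syntaxhighlight>

Even this toy example shows the characteristic difficulty: because the heuristic always prefers the longest dictionary entry, it segments ''自然语言'' as one word rather than two, and a different lexicon would yield a different segmentation.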
== Statistical NLP ==
{{main|Stochastic grammar|l1=statistical natural language processing}}
Statistical natural-language processing uses [[stochastic]], [[probabilistic]] and [[statistical]] methods to resolve some of the difficulties discussed above, especially those which arise because longer sentences are highly ambiguous when processed with realistic grammars, yielding thousands or millions of possible analyses. Methods for disambiguation often involve the use of [[corpus linguistics|corpora]] and [[Markov model]]s. Statistical NLP comprises all quantitative approaches to automated language processing, including probabilistic modeling, [[information theory]], and [[linear algebra]].<ref>Christopher D. Manning and Hinrich Schütze, ''Foundations of Statistical Natural Language Processing'', MIT Press (1999), ISBN 978-0262133609, p. xxxi.</ref> The technology for statistical NLP comes mainly from [[machine learning]] and [[data mining]], two fields of [[artificial intelligence]] that involve learning from data.
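As a minimal sketch of the Markov-model idea mentioned above (the toy corpus and the add-one smoothing below are illustrative assumptions, not taken from the cited literature), a first-order model over words can be estimated from counts and then used to prefer the more probable of two competing word sequences:

<syntaxhighlight lang="python">
from collections import Counter
import math

# Hypothetical toy corpus; a real system would be trained on a large corpus.
corpus = [
    "the monkeys ate the bananas".split(),
    "the bananas were ripe".split(),
    "the monkeys were hungry".split(),
]

# Unigram and bigram counts define a first-order Markov model over words.
unigrams = Counter(w for sent in corpus for w in sent)
bigrams = Counter(pair for sent in corpus for pair in zip(sent, sent[1:]))
vocab_size = len(unigrams)

def bigram_logprob(sentence):
    """Log-probability of a word sequence under an add-one-smoothed bigram model."""
    return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size))
               for a, b in zip(sentence, sentence[1:]))

# The model assigns a higher score to the word sequence it has evidence for.
print(bigram_logprob("the monkeys ate the bananas".split()))
print(bigram_logprob("the bananas ate the monkeys".split()))
</syntaxhighlight>

In practice such estimates are drawn from much larger corpora and combined with richer models (e.g. hidden Markov models or probabilistic grammars), but the underlying idea of scoring competing analyses by probability is the same.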
==Major tasks in NLP==
* [[Automatic summarization]]
* [[Foreign language reading aid]]
* [[Foreign language writing aid]]
* [[Information extraction]]
* [[Information retrieval]]
* [[Machine translation]]
* [[Named entity recognition]]
* [[Natural language generation]]
* [[Natural language understanding]]
* [[Optical character recognition]]
* [[Question answering]]
* [[Speech recognition]]
* [[Spoken dialogue system]]
* [[Text simplification]]
* [[Text to speech]]
* [[Text-proofing]]
== Evaluation of natural language processing ==
===Objectives===
The goal of NLP evaluation is to measure one or more ''qualities'' of an algorithm or a system, in order to determine whether (or to what extent) the system meets the goals of its designers or the needs of its users. Research in NLP evaluation has received considerable attention, because the definition of proper evaluation criteria is one way to specify an NLP problem precisely, thus going beyond the vagueness of tasks defined only as ''language understanding'' or ''language generation''. A precise set of evaluation criteria, which includes mainly evaluation data and evaluation metrics, enables several teams to compare their solutions to a given NLP problem.
===Short history of evaluation in NLP===
The first evaluation campaign on written texts seems to have been a campaign dedicated to message understanding in 1987 (Pallett 1998). The Parseval/GEIG project then compared phrase-structure grammars (Black 1991). A series of campaigns within the Tipster project addressed tasks such as summarization, translation and searching (Hirschman 1998). In 1994, in Germany, the Morpholympics compared German taggers. The Senseval and Romanseval campaigns were then conducted with the objective of semantic disambiguation. In 1996, the Sparkle campaign compared syntactic parsers in four different languages (English, French, German and Italian). In France, the Grace project compared a set of 21 taggers for French in 1997 (Adda 1999). In 2004, during the [[Technolangue/Easy]] project, 13 parsers for French were compared. Large-scale evaluations of dependency parsers were performed in the context of the CoNLL shared tasks in 2006 and 2007. In Italy, the EVALITA campaign was conducted in 2007 to compare various tools for Italian ([http://evalita.itc.it EVALITA web site]). In France, within the ANR-Passage project (end of 2007), 10 parsers for French were compared ([http://atoll.inria.fr/passage/ Passage web site]).
* Adda G., Mariani J., Paroubek P., Rajman M. (1999). "L'action GRACE d'évaluation de l'assignation des parties du discours pour le français" [The GRACE evaluation campaign for part-of-speech assignment in French]. ''Langues'', vol. 2.
* Black E., Abney S., Flickinger D., Gdaniec C., Grishman R., Harrison P., Hindle D., Ingria R., Jelinek F., Klavans J., Liberman M., Marcus M., Roukos S., Santorini B., Strzalkowski T. (1991). "A procedure for quantitatively comparing the syntactic coverage of English grammars". DARPA Speech and Natural Language Workshop.
* Hirschman L. (1998). "Language understanding evaluation: lessons learned from MUC and ATIS". LREC, Granada.
* Pallett D.S. (1998). "The NIST role in automatic speech recognition benchmark tests". LREC, Granada.
===Different types of evaluation===
Depending on the evaluation procedures, a number of distinctions are traditionally made in NLP evaluation.
* Intrinsic vs. extrinsic evaluation
Intrinsic evaluation considers an isolated NLP system and characterizes its performance mainly with respect to a ''gold standard'' result pre-defined by the evaluators. Extrinsic evaluation, also called ''evaluation in use'', considers the NLP system in a more complex setting, either as an embedded system or serving a precise function for a human user. The extrinsic performance of the system is then characterized in terms of its utility with respect to the overall task of the complex system or the human user.
* Black-box vs. glass-box evaluation
Black-box evaluation requires one to run an NLP system on a given data set and to measure a number of parameters related to the quality of the process (speed, reliability, resource consumption) and, most importantly, to the quality of the result (e.g. the accuracy of data annotation or the fidelity of a translation). Glass-box evaluation looks at the design of the system, the algorithms that are implemented, the linguistic resources it uses (e.g. vocabulary size), etc. Given the complexity of NLP problems, it is often difficult to predict performance only on the basis of glass-box evaluation, but this type of evaluation is more informative with respect to error analysis or future developments of a system.
* Automatic vs. manual evaluation
In many cases, automatic procedures can be defined to evaluate an NLP system by comparing its output with the gold standard (or desired) one. Although the cost of producing the gold standard can be quite high, automatic evaluation can be repeated as often as needed without much additional cost (on the same input data). However, for many NLP problems the definition of a gold standard is a complex task, and it can prove impossible when inter-annotator agreement is insufficient. Manual evaluation is performed by human judges, who are instructed to estimate the quality of a system, or most often of a sample of its output, based on a number of criteria. Although, thanks to their linguistic competence, human judges can be considered the reference for a number of language processing tasks, there is also considerable variation across their ratings. This is why automatic evaluation is sometimes referred to as ''objective'' evaluation, while the human kind appears to be more ''subjective''.
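As a minimal sketch of automatic, intrinsic evaluation against a gold standard (the token labels and the choice of metric below are illustrative assumptions rather than the protocol of any particular campaign), a system's output can be scored with precision, recall and the F-measure:

<syntaxhighlight lang="python">
# Hypothetical gold-standard and system annotations, one label per token
# (e.g. named-entity tags); "O" marks tokens outside any entity.
gold   = ["PER", "O", "O",   "LOC", "O", "ORG", "O"]
system = ["PER", "O", "LOC", "LOC", "O", "O",   "O"]

def precision_recall_f1(gold, system, label):
    """Precision, recall and F1 of the system for one label, against the gold standard."""
    tp = sum(1 for g, s in zip(gold, system) if s == label and g == label)
    fp = sum(1 for g, s in zip(gold, system) if s == label and g != label)
    fn = sum(1 for g, s in zip(gold, system) if s != label and g == label)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

print(precision_recall_f1(gold, system, "LOC"))   # -> (0.5, 1.0, 0.666...)
</syntaxhighlight>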
=== Shared tasks (campaigns) ===
* [[BioCreative]]
* [[Message Understanding Conference]]
* [[Technolangue/Easy]]
* [[Text Retrieval Conference]]
==Standardization in NLP==
An ISO sub-committee is working to ease interoperability between [[lexical resource]]s and NLP programs. The sub-committee is part of [[ISO/TC37]] and is called ISO/TC37/SC4. Some ISO standards have already been published, but most are still under development, mainly concerning lexicon representation (see [[lexical markup framework|LMF]]), annotation and the data category registry.
==References==
{{Reflist}}
==Journals==
* [[Computational Linguistics (journal)|Computational Linguistics]]
* [[Language Resources and Evaluation]]
* [[Linguistic Issues in Language Technology]]
==Organizations and conferences==
===Associations===
*[[Association for Computational Linguistics]]
*[[Association for Machine Translation in the Americas]]
*[[AFNLP]] (Asian Federation of Natural Language Processing Associations)
*[[Australasian Language Technology Association]] (ALTA)
===Conferences===
* [[LREC|Language Resources and Evaluation]]
== Software tools ==
* [[General Architecture for Text Engineering]]
* [[Natural Language Toolkit]] (NLTK): a [[Python (programming language)|Python]] library suite (a brief usage sketch follows this list)
* [[Expert System S.p.A.]]
* [[OpenNLP]]
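As a small indication of what such toolkits provide (the exact tag output depends on the models installed), a minimal [[Natural Language Toolkit|NLTK]] session can tokenize and part-of-speech-tag a sentence:

<syntaxhighlight lang="python">
import nltk

# The tokenizer and tagger models must be downloaded once, e.g.:
#   nltk.download('punkt'); nltk.download('averaged_perceptron_tagger')
tokens = nltk.word_tokenize("Time flies like an arrow.")
print(nltk.pos_tag(tokens))
# e.g. [('Time', 'NNP'), ('flies', 'VBZ'), ('like', 'IN'), ('an', 'DT'), ('arrow', 'NN'), ('.', '.')]
</syntaxhighlight>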
== See also ==
* [[AskWiki]]
* [[Biomedical text mining]]
* [[Chatterbot]]
* [[Computational linguistics]]
* [[Computer-assisted reviewing]]
* [[Controlled natural language]]
* [[Human language technology]]
* [[Inform|Inform 7]] programming language
* [[Information retrieval]]
* [[Latent semantic indexing]]
* [[Lexical markup framework]]
* [[Lexxe]]
* [[lojban]] / [[loglan]]
* [[Name resolution]]
* [[Transderivational search]]
* [[Universal translator]] (fictional)
===Implementations===
* [[Infonic]] Sentiment, an NLP-based news analysis software package that reads news flows and provides [[Market sentiment|news sentiment]] signals for the [[algorithmic trading]] systems of [[investment bank]]s
* [[LinguaStream]], a generic platform for NLP experimentation
* [[Modular Audio Recognition Framework|MARF]], a framework for voice and [[Stochastic grammar|statistical NLP processing]]
* [[Nortel Speech Server]], a [[speech processing]] system primarily used for large-vocabulary [[speech recognition]], natural-language understanding, [[text-to-speech]], and [[speaker verification]]
==External links==
===Resources===
* [http://www.cs.technion.ac.il/~gabr/resources/resources.html Resources for Text, Speech and Language Processing]
* [http://www.proxem.com/Resources/tabid/54/Default.aspx A comprehensive list of resources, classified by category]
* [https://kitwiki.csc.fi/twiki/bin/view/FiLT/FiLTWikiEn Language Technology Documentation Centre in Finland (FiLT)]
* [http://specgram.com/CLIII.4/08.phlogiston.cartoon.zhe.html Some simple examples of NLP-hard utterances.]
===Organizations===
* [http://nlp.stanford.edu/ The Stanford Natural Language Processing Group]
[[Category:Computational linguistics]]
[[Category:Speech recognition]]
[[Category:Natural language processing|*]]
[[ar:معالجة اللغات الطبيعية]]
[[zh-min-nan:Chū-jiân gú-giân chhú-lí]]
[[be:Апрацоўка натуральнай мовы]]
[[be-x-old:Апрацоўка натуральнай мовы]]
[[bg:Обработка на естествен език]]
[[ca:Processament de llenguatge natural]]
[[cs:Zpracování přirozeného jazyka]]
[[da:Sprogteknologi]]
[[de:Natural language processing]]
[[es:Procesamiento de lenguajes naturales]]
[[eo:Komputila lingvistiko]]
[[eu:Lengoaia naturalen prozesamendua]]
[[fa:پردازش زبانهای طبیعی]]
[[fr:Traitement automatique des langues]]
[[gl:Procesamento da linguaxe natural]]
[[it:Elaborazione del linguaggio naturale]]
[[he:עיבוד שפה טבעית]]
[[lt:Natūralios kalbos apdorojimas]]
[[ja:自然言語処理]]
[[ko:자연 언어 처리]]
[[pl:Analiza języka naturalnego]]
[[pt:Processamento de linguagem natural]]
[[ru:Обработка естественного языка]]
[[simple:Natural language processing]]
[[sr:Obrada prirodnih jezika]]
[[th:การประมวลผลภาษาธรรมชาติ]]
[[tr:Doğal dil işleme]]
[[uk:Обробка природної мови]]
[[zh:自然语言处理]]