Natural language processing
Natural language processing (NLP) is a field of computer science, artificial intelligence, and computational linguistics that tries to make computers understand human (natural) languages. It is a field strongly related to human-computer interaction.
The main tasks related to NLP are:
- Automatic summarization: produce a readable summary of a chunk of text. Often used to provide summaries of text of a known type, such as articles in the financial section of a newspaper (see the sketches after this list).
- Coreference resolution: given a sentence or larger chunk of text, determine which words (“mentions”) refer to the same objects (“entities”). Anaphora resolution is a specific example of this task, and is specifically concerned with matching up pronouns with the nouns or names that they refer to.
- Discourse analysis: this rubric includes a number of related tasks. One task is identifying the discourse structure of connected text, i.e. the nature of the discourse relationships between sentences (e.g. elaboration, explanation, contrast). Another possible task is recognizing and classifying the speech acts in a chunk of text (e.g. yes-no question, content question, statement, assertion, etc.).
- Machine translation: automatically translate text from one human language to another. This is one of the most difficult problems and belongs to the group of problems colloquially termed “AI-complete”: solving it requires not only the text itself but also the kind of knowledge that humans possess, such as grammar, semantics, and facts about the real world.
- Morphological segmentation: separate words into individual morphemes and identify the class of the morphemes. The difficulty of this task depends greatly on the complexity of the morphology (i.e. the structure of words) of the language being considered. English has fairly simple morphology, especially inflectional morphology, and thus it is often possible to ignore this task entirely and simply model all possible forms of a word (e.g. “open, opens, opened, opening”) as separate words (see the sketches after this list).
- Named entity recognition (NER): given a stream of text, determine which items in the text map to proper names, such as people or places, and what the type of each such name is (e.g. person, location, organization); see the sketches after this list. Note that, although capitalization can aid in recognizing named entities in languages such as English, this information cannot aid in determining the type of named entity, and in any case is often inaccurate or insufficient.
- Natural language generation: convert information from computer databases into readable human language.
- Natural language understanding: convert chunks of text into more formal representations, such as first-order logic structures, that are easier for computer programs to manipulate. Natural language understanding involves identifying the intended meaning among the multiple possible meanings that can be derived from a natural language expression, where the formal representation usually takes the form of organized notations of natural language concepts.
- Optical character recognition (OCR): given an image representing printed text, determine the corresponding text.
- Part-of-speech tagging: given a sentence, determine the part of speech of each word. Many words, especially common ones, can serve as multiple parts of speech, and some languages have more such ambiguity than others. Languages with little inflectional morphology, such as English, are particularly prone to such ambiguity. Chinese is also prone to it because it is a tonal language, and inflection expressed during verbalization is not readily conveyed by the characters of its orthography (see the sketches after this list).
- Parsing: determine the parse tree (grammatical analysis) of a given sentence. The grammar of natural languages is ambiguous, and typical sentences have multiple possible analyses, some of which make no sense to a human reader (see the sketches after this list).
- Question answering: given a human-language question, determine its answer. Some questions have a closed, specific answer, while open-ended questions are much more difficult to answer.
- Relationship extraction: given a chunk of text, identify the relationships among named entities (e.g. who is married to whom).
- Sentence breaking (also known as sentence boundary disambiguation): given a chunk of text, find the sentence boundaries. Sentence boundaries are often marked by periods or other punctuation marks, but these same characters can serve other purposes (see the sketches after this list).
- Sentiment analysis: extract subjective information, usually from a set of documents such as online reviews, to determine the “polarity” expressed about specific objects. It is especially useful for identifying trends of public opinion in social media, for example for marketing purposes (see the sketches after this list).
- Speech recognition: given a sound clip of a person or people speaking, determine the textual representation of the speech. This is the opposite of text to speech and is one of the extremely difficult problems colloquially termed “AI-complete”. In natural speech there are hardly any pauses between successive words, and thus speech segmentation is a necessary subtask of speech recognition. Note also that in most spoken languages, the sounds representing successive letters blend into each other in a process termed coarticulation, so the conversion of the analog signal to discrete characters can be a very difficult process.
- Speech segmentation: given a sound clip of a person or people speaking, separate it into words. A subtask of speech recognition and typically grouped with it.
- Topic segmentation and recognition: given a chunk of text, separate it into segments each of which is devoted to a topic, and identify the topic of the segment.
- Word segmentation: separate a chunk of continuous text into separate words.
- Word sense disambiguation: many words have more than one meaning; we have to select the meaning which makes the most sense in context. For this problem, we are typically given a list of words and associated word senses, e.g. from a dictionary or from an online resource such as WordNet (see the sketches after this list).
- Information retrieval (IR): this is concerned with storing, searching and retrieving information. It is a separate field within computer science (closer to databases), but IR relies on some NLP methods, for example stemming (see the sketches after this list). Some current research and applications seek to bridge the gap between IR and NLP.
- Information extraction (IE): this is concerned in general with the extraction of semantic information from text. It covers tasks such as named entity recognition, coreference resolution, relationship extraction, etc.
- Speech processing: this covers speech recognition, text-to-speech and related tasks.
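The sketches referred to above follow here. They use NLTK (linked under Material below) and are minimal illustrations rather than production implementations; they assume the relevant NLTK data packages (e.g. “punkt”, “stopwords”, “wordnet”, “vader_lexicon”, the default tagger and chunker models) have already been fetched with nltk.download(). First, a naive take on automatic summarization: the hypothetical summarize helper scores each sentence by the frequency of its non-stopword words and keeps the top-scoring sentences.

```python
from collections import Counter

import nltk
from nltk.corpus import stopwords


def summarize(text, num_sentences=2):
    """Naive extractive summary: keep the sentences containing the most frequent words."""
    sentences = nltk.sent_tokenize(text)
    words = [w.lower() for w in nltk.word_tokenize(text) if w.isalpha()]
    stops = set(stopwords.words("english"))
    freq = Counter(w for w in words if w not in stops)

    def score(sentence):
        # A sentence scores the sum of the document frequencies of its words.
        return sum(freq[w.lower()] for w in nltk.word_tokenize(sentence))

    best = set(sorted(sentences, key=score, reverse=True)[:num_sentences])
    # Return the selected sentences in their original order.
    return " ".join(s for s in sentences if s in best)
```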
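For morphology, the forms “open, opens, opened, opening” mentioned above can be reduced to a common base form with a rule-based stemmer or a WordNet lemmatizer. This is a sketch of word-form normalization rather than full segmentation into morphemes.

```python
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

for form in ["open", "opens", "opened", "opening"]:
    # The stemmer applies suffix-stripping rules; the lemmatizer consults WordNet.
    print(form, "->", stemmer.stem(form), "/", lemmatizer.lemmatize(form, pos="v"))
```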
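Named entity recognition can be sketched with NLTK’s built-in chunker: tokenize, tag parts of speech, then chunk the tagged tokens into labelled entities such as PERSON or GPE (geo-political entity). The sentence is a made-up example.

```python
import nltk

sentence = "Barack Obama was born in Hawaii and worked in Chicago."
tagged = nltk.pos_tag(nltk.word_tokenize(sentence))
tree = nltk.ne_chunk(tagged)

for subtree in tree:
    if hasattr(subtree, "label"):  # named-entity chunks are subtrees of the result
        entity = " ".join(word for word, tag in subtree.leaves())
        print(subtree.label(), "->", entity)  # e.g. PERSON -> Barack Obama, GPE -> Hawaii
```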
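Part-of-speech ambiguity shows up directly in tagger output: in the classic sentence below, “refuse” and “permit” each occur once as a verb and once as a noun.

```python
import nltk

tokens = nltk.word_tokenize("They refuse to permit us to obtain the refuse permit")
print(nltk.pos_tag(tokens))
# Typically: [('They', 'PRP'), ('refuse', 'VBP'), ('to', 'TO'), ('permit', 'VB'), ...,
#             ('the', 'DT'), ('refuse', 'NN'), ('permit', 'NN')]
```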
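Syntactic ambiguity can be made concrete with a tiny context-free grammar: under the toy grammar below, the classic sentence “I shot an elephant in my pajamas” has two parse trees, with the prepositional phrase attached either to the verb phrase or to the noun phrase.

```python
import nltk

grammar = nltk.CFG.fromstring("""
  S -> NP VP
  PP -> P NP
  NP -> Det N | Det N PP | 'I'
  VP -> V NP | VP PP
  Det -> 'an' | 'my'
  N -> 'elephant' | 'pajamas'
  V -> 'shot'
  P -> 'in'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("I shot an elephant in my pajamas".split()):
    print(tree)  # two trees are printed, one per reading
```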
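Sentence breaking has to cope with periods that do not end sentences, as in abbreviations; NLTK’s Punkt tokenizer handles the common cases.

```python
import nltk

text = "Dr. Smith moved to the U.S. in 1999. She now works on NLP. Does she also teach?"
for sentence in nltk.sent_tokenize(text):
    print(sentence)
# The periods in "Dr." and "U.S." are not treated as sentence boundaries,
# so three sentences are printed.
```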
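Sentiment analysis over short, review-like texts can be sketched with NLTK’s VADER analyzer, which produces a compound polarity score (positive above zero, negative below). The reviews are made-up examples.

```python
from nltk.sentiment import SentimentIntensityAnalyzer

analyzer = SentimentIntensityAnalyzer()
for review in ["Great phone, the battery lasts forever!",
               "Terrible service, I want my money back."]:
    scores = analyzer.polarity_scores(review)
    print(f"{scores['compound']:+.2f}  {review}")  # compound > 0: positive, < 0: negative
```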
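Word sense disambiguation can be sketched with the simplified Lesk algorithm over WordNet senses: for an ambiguous word such as “bank”, it picks the sense whose dictionary gloss overlaps most with the context. The overlap heuristic is crude, so the chosen senses are not always the intuitive ones.

```python
from nltk import word_tokenize
from nltk.wsd import lesk

for sentence in ["I deposited the cheque at the bank",
                 "We sat on the grassy bank of the river"]:
    sense = lesk(word_tokenize(sentence), "bank")  # returns a WordNet synset
    print(sentence, "->", sense, "-", sense.definition())
```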
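Finally, a toy inverted index shows how information retrieval borrows NLP methods such as stemming: documents and queries are lower-cased, tokenized, and stemmed, so “retrieval” and “retrieving” map to the same index entry. The documents and the search helper are hypothetical examples.

```python
from collections import defaultdict

from nltk import word_tokenize
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
documents = {
    1: "Information retrieval and databases",
    2: "Retrieving documents with natural language queries",
}

# Build the inverted index: stem -> set of document ids containing it.
index = defaultdict(set)
for doc_id, text in documents.items():
    for token in word_tokenize(text.lower()):
        index[stemmer.stem(token)].add(doc_id)

def search(query):
    # A document matches if it contains every stemmed query term.
    return set.intersection(*(index[stemmer.stem(t)] for t in word_tokenize(query.lower())))

print(search("retrieval"))  # -> {1, 2}: both documents share the stem "retriev"
```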
See also
Computational intelligence, Mathematical optimization, Computer vision, Artificial intelligence, Data analysis, Machine learning
Material
- http://research.google.com/pubs/NaturalLanguageProcessing.html
- http://www.nltk.org/
Papers
- Bates, M (1995). Models of natural language understanding. Proceedings of the National Academy of Sciences of the United States of America 92 (22): 9977-9982
Books
- Manning, Christopher D.; Schütze, Hinrich (1999). Foundations of Statistical Natural Language Processing. MIT Press (MA)
- Jurafsky, Dan; Martin, James H. (2000). Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics and Speech Recognition. Prentice Hall
- Bird, Steven; Klein, Ewan; Loper, Edward (2009). Natural Language Processing with Python. O’Reilly Media