This book describes the Spoken Language Translator (SLT), one of the first major projects in the area of automatic speech translation.
Kathleen McKeown identifies and formalises principles of discourse so that they can be used in a computational process. The text generation theory she describes has been embodied in a computer program, TEXT, which, given a question, can produce a paragraph-length response. An appendix to the book provides examples of the TEXT system in operation.
Of great interest to those working in the fields of computational linguistics, logic, semantics, artificial intelligence and linguistics generally, this 1992 collection includes both tutorial and advanced material in order to orient the uninitiated to the concepts and problems that are at issue.
This book presents a collection of papers on the issue of focus in its broadest sense. While focusing is commonly considered in relation to phenomena such as presupposition and anaphora, it is far more pervasive, and it is this pervasiveness that the collection addresses.
The relation between ontologies and language is currently at the forefront of natural language processing (NLP). This book focuses on the technology involved in enabling integration between lexical resources and semantic technologies. It will be of interest to researchers and graduate students in NLP, computational linguistics, and knowledge engineering.
This collection of contributions addresses the problem of words and their meaning. This remains a difficult and controversial area within linguistics, philosophy and artificial intelligence. The title aims to provide answers based on empirical linguistics methods that are relevant across disciplines and accessible to researchers from different backgrounds.
Relational Models of the Lexicon not only provides an invaluable survey of research in relational semantics, but offers a stimulus for potential research advances in semantics, natural language processing and knowledge representation.
Memory-based language processing - a machine learning and problem solving method for language technology - is based on the idea that the direct reuse of examples using analogical reasoning is more suited for solving language processing problems than the application of rules extracted from those examples. This book discusses the theory and practice of memory-based language processing, showing its comparative strengths over alternative methods of language modelling. Language is complex, with few generalizations, many sub-regularities and exceptions, and the advantage of memory-based language processing is that it does not abstract away from this valuable low-frequency information. By applying the model to a range of benchmark problems, the authors show that for linguistic areas ranging from phonology to semantics, it produces excellent results. They also describe TiMBL, a software package for memory-based language processing. The first comprehensive overview of the approach, this book will be invaluable for computational linguists, psycholinguists and language engineers.
Editors Madeleine Bates and Ralph Weischedel have invited leading researchers in the field of natural language processing to assess the theoretical and applied work achieved to date.
This book offers a comprehensive overview of the human language technology field.
The lexicon is now a major focus of research in computational linguistics and natural language processing (NLP). This collection describes techniques of lexical representation within a unification-based framework and their linguistic application, concentrating on the issue of structuring the lexicon using inheritance and defaults.
Computational Lexical Semantics is one of the first volumes to provide models for the creation of various kinds of computerized lexicons.
A collection of new papers by leading researchers on natural language parsing.
Margaret Masterman was a pioneer in the field of computational linguistics. Working in the earliest days of language processing by computer, she believed that meaning, not grammar, was the key to understanding languages, and that machines could determine the meaning of sentences. She was able, even on simple machines, to undertake sophisticated experiments in machine translation, and carried out important work on the use of semantic codings and thesauri to determine the meaning structure of texts. This volume brings together Masterman's groundbreaking papers for the first time. Through his insightful commentaries, Yorick Wilks argues that Masterman came close to developing a computational theory of language meaning based on the ideas of Wittgenstein, and shows the importance of her work in the philosophy of science and the nature of iconic languages. Of key interest in computational linguistics and artificial intelligence, it will remind scholars of Masterman's significant contribution to the field.
This book explains how to build Natural Language Generation systems - computer software systems which automatically generate understandable texts in English or other human languages. The book covers the algorithms and representations needed to perform the core tasks of document planning, microplanning, and surface realization.
A primary problem in the area of natural language processing has been that of semantic analysis. Semantic Processing for Finite Domains presents an approach to the computational processing of English text that combines current theories of knowledge representation and reasoning in Artificial Intelligence with the latest linguistic views of lexical semantics.
This study explores an approach to text generation that interprets systemic grammar as a computational representation. Terry Patten demonstrates that systemic grammar can be easily and automatically translated into current AI knowledge representations and efficiently processed by the same knowledge-based techniques currently exploited by expert systems.
Ralph Grishman provides an integrated introduction to, and valuable survey of, the field of computer analysis of language. The book tackles syntax analysis, semantic analysis, text analysis and natural language generation through clear exposition and exercises. It is written for readers with some background in computer science and finite mathematics.
Drawing on case studies from around the world, this book develops a formal computational theory of writing systems and relates it to psycholinguistic results. Sproat then proposes a taxonomy of writing systems. The book will be of interest to students and researchers in theoretical and computational linguistics, psycholinguistics and speech technology.
Logics of Conversation presents a dynamic semantic framework called Segmented Discourse Representation Theory, or SDRT, where the interaction between discourse coherence and discourse interpretation is explored in a logically precise manner.
This comprehensive introduction to all the core areas and many emerging themes of sentiment analysis approaches the problem from a natural-language-processing angle. The author explains the underlying structure and the language constructs that are commonly used to express opinions and sentiments and presents computational methods to analyze and summarize opinions.
On scrutinising how we refer to things in conversation, we find that we rarely state explicitly what object we mean, although we expect an interlocutor to discern it. Dr Kronfeld provides answers to two questions: how do we successfully refer, and how can a computer be programmed to do so?
This book presents computational mechanisms for solving common language interpretation problems.
A theoretically motivated foundation for semantic interpretation by computer, showing how this framework helps resolve lexical and syntactic ambiguities. The approach is interdisciplinary, drawing on research in computational linguistics, AI, Montague semantics, and cognitive psychology.
An investigation into the problems of generating natural language utterances to satisfy specific goals the speaker has in mind.