Recognizing textual entailment [electronic resource] : models and applications / Ido Dagan [and others]

Bibliographic Details
Online Access: Full Text (via Morgan & Claypool)
Other Authors: Dagan, Ido
Format: Electronic eBook
Language: English
Published: San Rafael, Calif. (1537 Fourth Street, San Rafael, CA 94901 USA) : Morgan & Claypool, ©2013.
Series: Synthesis lectures on human language technologies (Online) ; #23.
Table of Contents:
  • 1. Textual entailment
  • 1.1 Motivation and rationale
  • 1.2 The recognizing textual entailment task
  • 1.2.1 The scope of textual entailment
  • 1.2.2 The role of background knowledge
  • 1.2.3 Textual entailment versus linguistic notion of entailment
  • 1.2.4 Extending entailment recognition with contradiction detection
  • 1.2.5 The challenge and opportunity of RTE
  • 1.3 Applications of textual entailment solutions
  • 1.3.1 Question answering
  • 1.3.2 Relation extraction
  • 1.3.3 Text summarization
  • 1.3.4 Additional applications
  • 1.4 Textual entailment evaluation
  • 1.4.1 RTE-1 through RTE-5
  • 1.4.2 RTE-6 and RTE-7
  • 1.4.3 Other evaluations of textual entailment technology
  • 1.4.4 Future directions for entailment evaluation
  • 2. Architectures and approaches
  • 2.1 An intuitive model for RTE
  • 2.2 Levels of representation in RTE systems
  • 2.2.1 Lexical-level RTE
  • 2.2.2 Structured representations for RTE
  • 2.3 Inference in RTE systems
  • 2.3.1 Similarity-based approaches
  • 2.3.2 Alignment-focused approaches
  • 2.3.3 "Proof Theoretic" RTE
  • 2.3.4 Hybrid approaches
  • 2.4 A conceptual architecture for RTE systems
  • 2.4.1 Preprocessing
  • 2.4.2 Enrichment
  • 2.4.3 Candidate alignment generation
  • 2.4.4 Alignment selection
  • 2.4.5 Classification
  • 2.4.6 Main decision-making approaches
  • 2.5 Emergent challenges
  • 2.5.1 Knowledge acquisition bottleneck: acquiring rules
  • 2.5.2 Noise-tolerant RTE architectures
  • 3. Alignment, classification, and learning
  • 3.1 An abstract scheme for textual entailment decisions
  • 3.2 Generating candidates and selecting alignments
  • 3.2.1 Anchors: linking texts and hypotheses
  • 3.2.2 Formalizing candidate alignment generation and alignment
  • 3.3 Classifiers, feature spaces, and machine learning
  • 3.4 Similarity feature spaces
  • 3.4.1 Token-level similarity features
  • 3.4.2 Structured similarity features
  • 3.4.3 Entailment trigger feature spaces
  • 3.4.4 Rewrite rule feature spaces
  • 3.4.5 Discussion
  • 3.5 Learning alignment functions
  • 3.5.1 Learning alignment from gold-standard data
  • 3.5.2 Learning entailment with a latent alignment
  • 4. Case studies
  • 4.1 Edit distance-based RTE
  • 4.1.1 Open source tree edit-based RTE system
  • 4.1.2 Tree edit distance with expanded edit types
  • 4.2 Logical representation and inference
  • 4.2.1 Representation
  • 4.2.2 Logical inference with abduction
  • 4.2.3 Logical inference with shallow backoff system
  • 4.3 Transformation-based approaches
  • 4.3.1 Transformation-based approach with integer linear programming
  • 4.3.2 Syntactic transformation with linguistically motivated rules
  • 4.3.3 Syntactic transformation with a probabilistic calculus
  • 4.3.4 Syntactic transformation with learned operation costs
  • 4.3.5 Natural logic
  • 4.4 Alignment-focused approaches
  • 4.4.1 Learning alignment selection independently of entailment
  • 4.4.2 Hand-coded alignment function
  • 4.4.3 Leveraging multiple alignments for RTE
  • 4.4.4 Aligning discourse commitments
  • 4.4.5 Latent alignment inference for RTE
  • 4.5 Paired similarity approaches
  • 4.6 Ensemble systems
  • 4.6.1 Weighted expert approach
  • 4.6.2 Selective expert approach
  • 4.7 Discussion
  • 5. Knowledge acquisition for textual entailment
  • 5.1 Scope of target knowledge
  • 5.2 Acquisition from manually constructed knowledge resources
  • 5.2.1 Mining computation-oriented knowledge resources
  • 5.2.2 Mining human-oriented knowledge resources
  • 5.3 Corpus-based knowledge acquisition
  • 5.3.1 Distributional similarity methods
  • 5.3.2 Co-occurrence-based methods
  • 5.3.3 Acquisition from parallel and comparable corpora
  • 5.4 Integrating multiple sources of evidence
  • 5.4.1 Integrating multiple information sources
  • 5.4.2 Simultaneous global learning of multiple rules
  • 5.5 Context sensitivity of entailment rules
  • 5.6 Concluding remarks and future directions
  • 6. Research directions in RTE
  • 6.1 Development of better/more flexible preprocessing tool chain
  • 6.2 Knowledge acquisition and specification
  • 6.3 Open source platform for textual entailment
  • 6.4 Task elaboration and phenomenon-specific RTE resources
  • 6.5 Learning and inference: efficient, scalable algorithms
  • 6.6 Conclusion
  • A. Entailment phenomena
  • Bibliography
  • Authors' biographies.