Book title: Probabilistic Semantic Web
Reasoning and Learning
Author: Riccardo Zese, University of Ferrara, Italy
Publisher: AKA-Verlag & IOS Press
Series: Studies on the Semantic Web
ISBN: 978-1-61499-733-7
e-ISBN: 978-1-61499-734-4
Get it from: the publisher, Amazon

Sample content: books.google.it

Bibtex entry

Slides for some of the chapters are available here.


Abstract

The management of uncertainty is of foremost importance in the Semantic Web, given the nature and origin of the available data. This book presents DISPONTE, a probabilistic semantics for knowledge bases (KBs) inspired by the distribution semantics of Probabilistic Logic Programming. The book also describes approaches for inference and learning; in particular, it discusses three reasoners and two learning algorithms. BUNDLE and TRILL find explanations for queries and compute their probability with respect to DISPONTE KBs, while TRILLP compactly represents explanations as a Boolean formula and computes the probability of queries from it. The system EDGE learns the parameters of the axioms of DISPONTE KBs; to reduce the computational cost, EDGEMR performs distributed parameter learning. LEAP learns both the structure and the parameters of KBs, and LEAPMR uses EDGEMR to reduce the computational cost. These algorithms provide effective techniques for dealing with uncertain KBs and have been extensively tested on various datasets and compared with state-of-the-art systems.
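To illustrate the idea behind the distribution semantics mentioned in the abstract, the following is a minimal sketch (not code from BUNDLE or TRILL, which use far more efficient techniques such as Binary Decision Diagrams): given independent probabilistic axioms and a set of explanations for a query, each explanation being a set of axioms that entails the query, the probability of the query is the total probability of the "worlds" (axiom subsets) that contain at least one explanation. The function name and data layout here are illustrative assumptions.

```python
from itertools import product

def query_probability(axiom_probs, explanations):
    """Brute-force query probability under the distribution semantics.

    axiom_probs:  dict mapping each probabilistic axiom to its probability,
                  assuming axioms are mutually independent.
    explanations: iterable of frozensets of axioms; the query holds in a
                  world iff the world includes all axioms of some explanation.
    """
    axioms = list(axiom_probs)
    total = 0.0
    # Enumerate every world: each axiom is either included or excluded.
    for choices in product([True, False], repeat=len(axioms)):
        world = {a for a, included in zip(axioms, choices) if included}
        # Probability of this world is the product of per-axiom choices.
        p = 1.0
        for a, included in zip(axioms, choices):
            p *= axiom_probs[a] if included else 1.0 - axiom_probs[a]
        # The query is entailed if some explanation is contained in the world.
        if any(exp <= world for exp in explanations):
            total += p
    return total

# Two axioms with probabilities 0.4 and 0.3, each alone explaining the query:
# P(query) = 1 - (1 - 0.4) * (1 - 0.3) = 0.58
print(query_probability({"ax1": 0.4, "ax2": 0.3},
                        [frozenset({"ax1"}), frozenset({"ax2"})]))
```

Enumeration is exponential in the number of probabilistic axioms; the book's reasoners avoid this blow-up by compiling the explanations into Binary Decision Diagrams (Chapter 13.2) or a pinpointing formula (TRILLP).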

Keywords: Probabilistic semantic web, distribution semantics, artificial intelligence, machine learning

Table of Contents

  1. Part I. Introduction
    1. 1. Semantic Web
      1. 1.1 Description Logics and Semantic Web
      2. 1.2 The Current Vision of the Semantic Web
    2. 2. Probability
      1. 2.1 Probabilistic Inference
      2. 2.2 Probabilistic Learning
    3. 3. Aims of the Thesis
    4. 4. Structure of the Thesis
  2. Part II. Description Logics
    1. 5. Foundations of Description Logics
    2. 6. Description Logics’ Characteristics
      1. 6.1 Concept and Role Constructors
      2. 6.2 Family of DLs
      3. 6.3 Knowledge Base
        1. 6.3.1 TBox
        2. 6.3.2 RBox
        3. 6.3.3 ABox
      4. 6.4 Semantics
    3. 7. Significant Examples of Description Logics
    4. 8. OWL: the Web Ontology Language
    5. 9. Inference in Description Logics
      1. 9.1 Approaches to Compute Explanations
        1. 9.1.1 Solving min-a-enum: The Standard Definition
        2. 9.1.2 Resolving min-a-enum: Pinpointing Formula
  3. Part III. A Probabilistic Semantics for Description Logics
    1. 10. Distribution Semantics
      1. 10.1 Formal Definition
      2. 10.2 PLP Languages under the Distribution Semantics
        1. 10.2.1 Logic Programming
        2. 10.2.2 LPAD
        3. 10.2.3 ProbLog
      3. 10.3 Inference in Probabilistic Logic Programming
        1. 10.3.1 ProbLog Inference System
        2. 10.3.2 PITA
      4. 10.4 Learning in Probabilistic Logic Programming
    2. 11. DISPONTE
    3. 12. Probabilistic Description Logics
  4. Part IV. Inference in Probabilistic DLs
    1. 13. Inference
      1. 13.1 Splitting Algorithm
      2. 13.2 Binary Decision Diagrams
    2. 14. BUNDLE
    3. 15. TRILL
      1. 15.1 TRILL on SWISH
    4. 16. TRILLP
    5. 17. Complexity of Inference
    6. 18. Related Inference Systems
    7. 19. Experiments
      1. 19.1 BUNDLE: Comparison with PRONTO
      2. 19.2 BUNDLE: Not Entailed Queries
      3. 19.3 BUNDLE: Inference with Limited Number of Explanations
      4. 19.4 BUNDLE: Scalability
      5. 19.5 TRILL, TRILLP & BUNDLE: Comparing Different Approaches
      6. 19.6 Discussion
  5. Part V. Learning in Probabilistic DLs
    1. 20. Learning
    2. 21. EDGE: Parameter Learning
      1. 21.1 Expectation Maximization Algorithm
      2. 21.2 EDGE
    3. 22. LEAP: Structure Learning
      1. 22.1 CELOE
      2. 22.2 LEAP
    4. 23. Distributed Learning
      1. 23.1 Map Reduce Approach
      2. 23.2 The Message Passing Interface Standard
      3. 23.3 EDGEMR
      4. 23.4 LEAPMR
    5. 24. Related Learning Systems
    6. 25. Experiments
      1. 25.1 EDGE: Comparison with Association Rules
      2. 25.2 LEAP & EDGE: a Comparison Between Different Learning Problems
      3. 25.3 EDGEMR : Parallelization Speedup
      4. 25.4 EDGEMR : Memory Consumption
      5. 25.5 LEAPMR : Parallelization Speedup
      6. 25.6 Discussion
  6. Part VI. Summary and Future Work
    1. 26. Conclusion
    2. 27. Future Work
    3. Bibliography