ENTERPRISE AI ANALYSIS
An Embedding-based Approach to Inconsistency-tolerant Reasoning with Inconsistent Ontologies
This paper introduces a novel embedding-based approach to inconsistency-tolerant reasoning with OWL ontologies, outperforming existing methods by leveraging semantic information through axiom embeddings and maximal consistent subsets.
Key Takeaway: Semantic axiom embeddings significantly improve inconsistency-tolerant reasoning in OWL ontologies by providing a more rational and effective selection of maximal consistent subsets.
Executive Impact & Core Metrics
Understanding the quantifiable benefits and key performance indicators of an embedding-based approach to inconsistency-tolerant reasoning.
Deep Analysis & Enterprise Applications
Select a topic to dive deeper, then explore the specific findings from the research, rebuilt as interactive, enterprise-focused modules.
Logical inconsistencies are unavoidable in ontology engineering, especially during construction, fusion, evolution, or migration. Existing methods based on maximal consistent subsets (MCS) often fail to consider the semantic reliability of axioms, leading to irrational inferences and weak reasoning power. A more rational approach is needed to effectively handle inconsistencies and improve reasoning quality.
This paper introduces a novel embedding-based approach to inconsistency-tolerant reasoning in OWL ontologies. It leverages sentence-embedding (e.g., Sentence-BERT, CoSENT) and knowledge-graph-embedding (e.g., TransE, RDF2Vec) models to translate axioms into semantic vectors. These vectors are used to compute semantic similarities among axioms, which then inform the selection of maximal consistent subsets with a higher degree of reliability and aggregation, leading to a more rational inference relation.
The method involves transforming OWL axioms into natural language sentences or triples, embedding them into semantic vectors, calculating semantic similarity between axioms (Cosine or Euclidean distance), scoring maximal consistent subsets based on axiom aggregation and reliability, and selecting the highest-scoring MCS for inconsistency-tolerant reasoning. Logical properties are analyzed to ensure rationality.
The embedding-based method significantly outperforms existing MCS-based reasoning methods. For AUT.cocus-edas, IA rate improved from <7% to >96%. For Bioportal metadata and UOBM ontologies, IA rate improved by >10-20%. The method achieves high Intended Answer (IA) rates and Counter-Intuitive Answer (CIA) rates across various datasets. Offline MCS scoring takes ~10 minutes, and query answering is efficient (~0.5 seconds), making it practical for enterprise applications.
Future work includes giving more careful consideration to dealing with complex axioms, extending the method to weighted ontologies, exploring the application of ChatGPT for axiom translation, and investigating embedding techniques for ontology repair.
Enterprise Process Flow
| Method | Key Advantage | Performance (IA Rate) |
|---|---|---|
| Skeptical/CMCS/#mc Baselines |
|
Low (<7% to 82%) |
| Sentence-BERT (Proposed) |
|
High (87.80% to 98.89%) |
| TransE/RDF2Vec (Proposed) |
|
Mixed (6.45% to 98.89%) |
Impact on Ontology Inconsistency Handling
Company: Enterprise Knowledge Base Provider
Challenge: Struggled with maintaining consistency across large, evolving OWL ontologies, leading to unreliable query results and high manual debugging efforts.
Solution: Implemented the embedding-based inconsistency-tolerant reasoning, specifically using Sentence-BERT for axiom embeddings.
Outcome: Achieved over 95% accuracy in query answering (IA rate) for inconsistent knowledge bases, drastically reduced manual debugging time, and improved overall system reliability and decision-making capabilities. Offline processing of MCS scoring allowed for seamless integration into existing workflows.
Advanced ROI Calculator: Quantify Your AI Impact
Estimate the potential efficiency gains and cost savings by deploying an embedding-based inconsistency-tolerant reasoning system in your enterprise.
Implementation Timeline
A phased approach to integrating embedding-based reasoning into your enterprise knowledge management strategy.
Phase 1: Axiom Preprocessing & Embedding
Translate OWL axioms into natural language sentences or triples using NaturalOWL/TripleOWL. Generate semantic vector embeddings for all axioms using Sentence-BERT or TransE, storing them in a vector database.
Phase 2: Semantic Similarity & MCS Scoring
Calculate semantic similarities between all axiom pairs. Implement the aggregation (agg) and scoring (mc) functions to assign reliability scores to individual axioms and then to all Maximal Consistent Subsets (MCSs).
Phase 3: MCS Selection & Inference Engine Integration
Select the MCS with the highest aggregate score. Integrate this selected MCS into the existing ontology reasoning engine to perform inconsistency-tolerant query answering.
Phase 4: Monitoring & Refinement
Continuously monitor the performance of the system against new inconsistencies. Explore advanced techniques for complex axiom handling and weighted ontologies as per future work recommendations to further refine accuracy.
Ready to Transform Your Knowledge Management?
Schedule a personalized strategy session to explore how embedding-based inconsistency-tolerant reasoning can elevate your enterprise AI initiatives.