This peer-reviewed paper was published in the 18th International Workshop on Ontology Matching, co-located with the 22nd International Semantic Web Conference (ISWC 2023). Read the full paper to explore the methodology, benchmarks, and evaluation results in detail.
Abstract
In this paper, a new perspective is suggested for unsupervised Ontology Matching (OM), also known as Ontology Alignment (OA), by treating it as a translation task: ontologies are represented as graphs, and the translation is performed from a node in the source ontology graph to a path in the target ontology graph.
The proposed framework, Truveta Mapper (TM), leverages a multi-task sequence-to-sequence transformer model to perform alignment across multiple ontologies in a zero-shot, unified, and end-to-end manner. Multi-tasking enables the model to implicitly learn the relationships between different ontologies via transfer learning, without requiring any manually labeled cross-ontology data. It also enables the framework to outperform existing solutions in both runtime latency and alignment quality.
The model is pre-trained and fine-tuned only on a publicly available text corpus and inner-ontology data. The proposed solution outperforms state-of-the-art approaches such as Edit-Similarity, LogMap, AML, and BERTMap, as well as the new OM frameworks recently presented at the Ontology Alignment Evaluation Initiative (OAEI 2022); it offers log-linear complexity and overall makes the OM task more efficient and straightforward, requiring little post-processing such as mapping extension or mapping repair.
We are open-sourcing our solution.
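
To make the node-to-path formulation concrete, the sketch below builds a toy target ontology and serializes a concept into its root-to-node path, i.e., the kind of output sequence a sequence-to-sequence model would be trained to generate from a source-ontology node. This is a minimal illustration only; the toy concepts, the helper `node_to_path`, and the path separator are assumptions and not the authors' implementation.

```python
# Minimal sketch (not the authors' code): illustrates the node-to-path target
# representation used when ontology matching is framed as translation. The toy
# ontology, concept names, and the path separator are assumptions made for
# illustration only.

# A toy target ontology expressed as child -> parent edges (None marks the root).
TARGET_PARENTS = {
    "Disorder of lung": "Disorder of thorax",
    "Disorder of thorax": "Disease",
    "Disease": "Clinical finding",
    "Clinical finding": None,
}

def node_to_path(concept, parents):
    """Return the root-to-node path for a concept in the target ontology."""
    path = []
    while concept is not None:
        path.append(concept)
        concept = parents[concept]
    return list(reversed(path))

# A sequence-to-sequence model would be trained to emit this path, token by
# token, given a source-ontology node as input; here we only print the target
# sequence such a model would generate.
source_node = "Lung disease"  # hypothetical source-ontology concept
target_path = node_to_path("Disorder of lung", TARGET_PARENTS)
print(f"{source_node} -> {' / '.join(target_path)}")
# Lung disease -> Clinical finding / Disease / Disorder of thorax / Disorder of lung
```

Because each prediction is a single generated path rather than a pairwise comparison against every target concept, this framing is what allows the approach to avoid the quadratic candidate scoring of traditional matchers.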
