dc.contributor.author | Machot, Fadi Al | |
dc.contributor.author | Horsch, Martin Thomas | |
dc.contributor.author | Ullah, Habib | |
dc.date.accessioned | 2025-05-22T08:47:37Z | |
dc.date.available | 2025-05-22T08:47:37Z | |
dc.date.issued | 2025-05-16 | |
dc.description.abstract | Growing concerns over the lack of transparency in AI, particularly in high-stakes fields like healthcare and finance, drive the need for explainable and trustworthy systems. While Large Language Models (LLMs) perform exceptionally well in generating accurate outputs, their “black box” nature poses significant challenges to transparency and trust. To address this, the paper proposes the TranspNet pipeline, which integrates symbolic AI with LLMs. By leveraging domain expert knowledge, retrieval-augmented generation (RAG), and formal reasoning frameworks like Answer Set Programming (ASP), TranspNet enhances LLM outputs with structured reasoning and verification. This approach strives to help AI systems deliver results that are as accurate, explainable, and trustworthy as possible, aligning with regulatory expectations for transparency and accountability. TranspNet provides a solution for developing AI systems that are reliable and interpretable, making it suitable for real-world applications where trust is critical. | en_US |
dc.identifier.citation | Machot, Horsch, Ullah: Building trustworthy AI: Transparent AI systems via language models, ontologies, and logical reasoning (TranspNet). In: Machot, Horsch, Scholze (eds.): Designing the Conceptual Landscape for a XAIR Validation Infrastructure: Proceedings of the International Workshop on Designing the Conceptual Landscape for a XAIR Validation Infrastructure, DCLXVI 2024, Kaiserslautern, Germany, 2025. Springer Nature, p. 25-34 | en_US |
dc.identifier.cristinID | FRIDAID 2380314 | |
dc.identifier.doi | https://doi.org/10.1007/978-3-031-89274-5_3 | |
dc.identifier.isbn | 9783031892738 | |
dc.identifier.issn | 2367-3370 | |
dc.identifier.issn | 2367-3389 | |
dc.identifier.uri | https://hdl.handle.net/10037/37117 | |
dc.language.iso | eng | en_US |
dc.publisher | Springer Nature | en_US |
dc.relation.projectID | EU – Horizon Europe (EC/HEU): 101138510 (DigiPass CSA) | en_US |
dc.relation.projectID | EU – Horizon Europe (EC/HEU): 101137725 (BatCAT) | en_US |
dc.relation.projectID | info:eu-repo/grantAgreement/EC/HORIZON/101138510/Germany/Harmonization of Advanced Materials Ecosystems serving strategic Innovation Markets to pave the way to a Digital Materials & Product Passport/DigiPass/ | en_US |
dc.relation.projectID | info:eu-repo/grantAgreement/EC/HORIZON/101137725/Norway/Battery Cell Assembly Twin/BatCAT/ | en_US |
dc.rights.accessRights | openAccess | en_US |
dc.rights.holder | Copyright 2025 The Author(s) | en_US |
dc.title | Building trustworthy AI: Transparent AI systems via language models, ontologies, and logical reasoning (TranspNet) | en_US |
dc.type.version | acceptedVersion | en_US |
dc.type | Chapter | en_US |
dc.type | Book chapter | en_US |