
dc.contributor.author: Machot, Fadi Al
dc.contributor.author: Horsch, Martin Thomas
dc.contributor.author: Ullah, Habib
dc.date.accessioned: 2025-05-22T08:47:37Z
dc.date.available: 2025-05-22T08:47:37Z
dc.date.issued: 2025-05-16
dc.description.abstract: Growing concerns over the lack of transparency in AI, particularly in high-stakes fields like healthcare and finance, drive the need for explainable and trustworthy systems. While Large Language Models (LLMs) perform exceptionally well in generating accurate outputs, their "black box" nature poses significant challenges to transparency and trust. To address this, the paper proposes the TranspNet pipeline, which integrates symbolic AI with LLMs. By leveraging domain expert knowledge, retrieval-augmented generation (RAG), and formal reasoning frameworks like Answer Set Programming (ASP), TranspNet enhances LLM outputs with structured reasoning and verification. This approach strives to help AI systems deliver results that are as accurate, explainable, and trustworthy as possible, aligning with regulatory expectations for transparency and accountability. TranspNet provides a solution for developing AI systems that are reliable and interpretable, making it suitable for real-world applications where trust is critical. [en_US]
dc.identifier.citation: Machot, Horsch, Ullah: Building trustworthy AI: Transparent AI systems via language models, ontologies, and logical reasoning (TranspNet). In: Machot, Horsch, Scholze (eds.): Designing the Conceptual Landscape for a XAIR Validation Infrastructure: Proceedings of the International Workshop on Designing the Conceptual Landscape for a XAIR Validation Infrastructure, DCLXVI 2024, Kaiserslautern, Germany, 2025. Springer Nature, p. 25-34 [en_US]
dc.identifier.cristinID: FRIDAID 2380314
dc.identifier.doi: https://doi.org/10.1007/978-3-031-89274-5_3
dc.identifier.isbn: 9783031892738
dc.identifier.issn: 2367-3370
dc.identifier.issn: 2367-3389
dc.identifier.uri: https://hdl.handle.net/10037/37117
dc.language.iso: eng [en_US]
dc.publisher: Springer Nature [en_US]
dc.relation.projectID: EU – Horizon Europe (EC/HEU): 101138510 (DigiPass CSA) [en_US]
dc.relation.projectID: EU – Horizon Europe (EC/HEU): 101137725 (BatCAT) [en_US]
dc.relation.projectID: info:eu-repo/grantAgreement/EC/HORIZON/101138510/Germany/Harmonization of Advanced Materials Ecosystems serving strategic Innovation Markets to pave the way to a Digital Materials & Product Passport/DigiPass/ [en_US]
dc.relation.projectID: info:eu-repo/grantAgreement/EC/HORIZON/101137725/Norway/Battery Cell Assembly Twin/BatCAT/ [en_US]
dc.rights.accessRights: openAccess [en_US]
dc.rights.holder: Copyright 2025 The Author(s) [en_US]
dc.title: Building trustworthy AI: Transparent AI systems via language models, ontologies, and logical reasoning (TranspNet) [en_US]
dc.type.version: acceptedVersion [en_US]
dc.type: Chapter [en_US]
dc.type: Bokkapittel [en_US]
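
The abstract describes a pipeline in which LLM outputs are verified against expert knowledge using Answer Set Programming. The sketch below is only an illustration of that verification idea, assuming the clingo Python API; the predicates, rules, and candidate answers are hypothetical placeholders and do not reproduce the authors' actual TranspNet implementation.

# Minimal, illustrative sketch: a candidate LLM answer is encoded as ASP facts
# and checked against hand-written domain rules with clingo. All predicates and
# rules here are hypothetical, not taken from the TranspNet paper.
import clingo

# Hypothetical domain knowledge, e.g. curated by a domain expert or derived
# from an ontology and RAG-retrieved context.
DOMAIN_RULES = """
% A treatment may only be recommended if it is approved and not contraindicated.
:- recommend(T), not approved(T).
:- recommend(T), contraindicated(T).
approved(drug_a).
contraindicated(drug_b).
"""

def verify(candidate_facts: str) -> bool:
    """Return True iff the candidate answer is consistent with the domain rules."""
    ctl = clingo.Control()
    ctl.add("base", [], DOMAIN_RULES + candidate_facts)
    ctl.ground([("base", [])])
    return ctl.solve().satisfiable

# An LLM answer such as "recommend drug_a" would first be mapped to ASP facts
# (e.g. via structured extraction over RAG-grounded output); here the facts are
# supplied directly for brevity.
print(verify("recommend(drug_a)."))  # True  -> answer passes the symbolic check
print(verify("recommend(drug_b)."))  # False -> answer rejected as inconsistent

In a pipeline of the kind the abstract outlines, only answers that pass such a symbolic consistency check would be returned to the user, which is what makes the overall output auditable rather than a bare "black box" response.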

