Improving decision transparency in autonomous maritime collision avoidance
Permanent link: https://hdl.handle.net/10037/36162
Date: 2025-01-06
Type: Journal article
Peer reviewed
Abstract
Recent advances in artificial intelligence (AI) have laid the foundation for developing a sophisticated collision avoidance
system for use in maritime autonomous surface ships, potentially enhancing maritime safety and decreasing the navigator’s
workload. Understanding the reasoning behind an AI system is inherently difficult. To help the human operator understand
what the AI system is doing and its reasoning, we employed a human-centered design approach to develop transparency
layers that visualize different aspects of an operation by displaying labels, diagrams, and simulations intended to improve the
user’s situation awareness (SA). The effectiveness and usability of the different layers were investigated through simulator-based experiments involving nautical students and licensed navigators. The SA global assessment technique was utilized to
measure navigators’ SA. User satisfaction was also measured, and effective layers were identified. The results indicate that
the transparency layers that enhance SA Level 3 are preferred by participants, suggesting a potential for improving human–
AI compatibility. However, the introduction of transparency layers does not uniformly enhance SA across all levels, and a
tendency toward passive decision-making was observed. The findings highlight the importance of balancing information
presentation with the user’s cognitive capabilities and suggest that further research is needed to refine transparency layers
for optimized human–AI compatibility in maritime navigation.
Publisher: Springer Nature
Citation: Madsen A, Brandsæter A, van de Merwe K, Park J. Improving decision transparency in autonomous maritime collision avoidance. Journal of Marine Science and Technology. 2024.
Copyright 2025 The Author(s)