The Titanic sank 113 years ago, on April 14-15, after hitting an iceberg, with human error likely causing the ship to stray into those dangerous waters. Today, autonomous systems built on artificial intelligence can help ships avoid such accidents, but could such a system explain to the captain why it was maneuvering a certain way?
That’s the idea behind explainable AI, which should help human actors trust autonomous systems more. Researchers from Osaka Metropolitan University’s Graduate School of Engineering have developed an explainable AI model for ships that quantifies the collision risk for all vessels in a given area, an important feature as key sea-lanes have become ever more congested.
Graduate student Hitoshi Yoshioka and Professor Hirotada Hashimoto created the AI model so that it explains the basis for its decisions and the intention behind its actions using numerical values for collision risk.
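The release does not detail how those risk values are computed, but a common basis for per-vessel collision-risk indices is the distance and time to the closest point of approach (DCPA/TCPA) between two ships. The sketch below is a minimal, hypothetical illustration along those lines; the collision_risk function, its linear risk mapping, and the dcpa_limit/tcpa_limit thresholds are assumptions for illustration, not the formulation used in the paper.

```python
import math

def collision_risk(own_pos, own_vel, tgt_pos, tgt_vel,
                   dcpa_limit=1852.0, tcpa_limit=600.0):
    """Hypothetical per-vessel collision risk in [0, 1].

    Combines distance at closest point of approach (DCPA) and
    time to closest point of approach (TCPA), a common basis for
    collision-risk indices, though not necessarily the paper's.
    Positions are in meters, velocities in m/s.
    """
    # Position and velocity of the target relative to own ship.
    rx, ry = tgt_pos[0] - own_pos[0], tgt_pos[1] - own_pos[1]
    vx, vy = tgt_vel[0] - own_vel[0], tgt_vel[1] - own_vel[1]
    v2 = vx * vx + vy * vy
    if v2 == 0.0:                      # no relative motion
        tcpa = 0.0
    else:
        tcpa = -(rx * vx + ry * vy) / v2
    if tcpa <= 0.0:                    # vessels are diverging
        return 0.0
    # Separation between the ships at the closest point of approach.
    dcpa = math.hypot(rx + vx * tcpa, ry + vy * tcpa)
    # Map small DCPA and small TCPA to high risk, linearly (assumed).
    risk_d = max(0.0, 1.0 - dcpa / dcpa_limit)
    risk_t = max(0.0, 1.0 - tcpa / tcpa_limit)
    return risk_d * risk_t

# Example: a crossing target about 2.8 km off the starboard bow.
own = ((0.0, 0.0), (0.0, 6.0))         # heading north at ~12 kn
tgt = ((2000.0, 2000.0), (-5.0, 0.0))  # heading west at ~10 kn
print(f"risk = {collision_risk(own[0], own[1], tgt[0], tgt[1]):.2f}")
```

Reporting a single number per nearby vessel, as in this sketch, is what would let such a system point to the specific ship driving a maneuver when explaining itself to the captain.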
“By being able to explain the basis for the judgments and behavioral intentions of AI-based autonomous ship navigation, I think we can earn the trust of maritime workers,” Professor Hashimoto stated. “I also believe that this research can contribute to the realization of unmanned ships.”
The findings were published in Applied Ocean Research.
Funding
This work was conducted in collaboration with ClassNK and was supported by JST SPRING Grant Number JPMJSP2139 and JSPS KAKENHI Grant Number 23H01627.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
###
About OMU
Established in Osaka as one of the largest public universities in Japan, Osaka Metropolitan University is committed to shaping the future of society through “Convergence of Knowledge” and the promotion of world-class research. For more research news, visit https://www.omu.ac.jp/en/ and follow us on social media: X, Facebook, Instagram, LinkedIn.