A new generation of artificial intelligence that provides a "confidence rating" of its answers could make algorithms more trustworthy and speed their rollout in safety-critical situations.

Scientists developing the technology, called "uncertainty-aware AI", say it removes the risk of hallucinations - the fictitious answers algorithms produce when they have incomplete or conflicting data.

The new AI assistant would instead give its best assessment of a situation when faced with patchy evidence, while making clear how confident it is in that answer.

A human operator would then be able to judge how much weight to put on the answer and seek additional information where necessary.
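The workflow described above - an answer paired with a confidence score, with a human stepping in when confidence is low - can be sketched in a few lines of Python. This is purely illustrative: the `Assessment` class, the scoring scheme, and the 0.8 review threshold are assumptions for the sketch, not digiLab's actual method.

```python
from dataclasses import dataclass


@dataclass
class Assessment:
    """An answer together with a confidence rating."""
    answer: str
    confidence: float  # 0.0 (no idea) to 1.0 (certain)

    @property
    def needs_human_review(self) -> bool:
        # Below this (arbitrary) threshold, a human operator should judge
        # the answer and seek additional information where necessary.
        return self.confidence < 0.8


def assess(evidence: dict[str, float]) -> Assessment:
    """Pick the best-supported answer and report how sure we are.

    `evidence` maps candidate answers to non-negative support scores;
    confidence is the winning answer's share of the total support.
    """
    total = sum(evidence.values())
    if total == 0:
        return Assessment(answer="unknown", confidence=0.0)
    best = max(evidence, key=evidence.get)
    return Assessment(answer=best, confidence=evidence[best] / total)


# Clear-cut evidence: high confidence, no review needed.
clear = assess({"safe": 9.0, "unsafe": 1.0})

# Patchy, conflicting evidence: low confidence, flagged for a human.
patchy = assess({"safe": 4.0, "unsafe": 3.0})
```

The point of the design is that a low score does not suppress the answer; it simply routes the decision back to a person, which is what makes the approach attractive in safety-critical settings.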

Image: Co-skipper Charlie Warhurst

Image: digiLab, the start-up which devised the AI assistant called the Uncertainty Engine