An artificial intelligence (AI) expert has warned there is no evidence the technology can be controlled – and so should not be developed.
Dr Roman Yampolskiy conducted an extensive review of the technology to discover how it may reshape society, and said it will not always be to our advantage.
‘We are facing an almost guaranteed event with potential to cause an existential catastrophe,’ said Dr Yampolskiy.
‘No wonder many consider this to be the most important problem humanity has ever faced. The outcome could be prosperity or extinction, and the fate of the universe hangs in the balance.’
He said humans’ ability to produce intelligent software far exceeds our ability to control AI – and that no advanced intelligent systems can ever be fully controlled.
‘Why do so many researchers assume that the AI control problem is solvable?’ he said. ‘To the best of our knowledge, there is no evidence for that, no proof. Before embarking on a quest to build a controlled AI, it is important to show that the problem is solvable.
‘This, combined with statistics that show the development of AI superintelligence is an almost guaranteed event, shows we should be supporting a significant AI safety effort.’
One problem put forward by Dr Yampolskiy is that, as AI becomes more intelligent, there will be an infinite number of safety issues. This will make it impossible to predict them all, and existing guard rails may not be enough.
He added that AI cannot always explain why it has decided something – or humans may not always be able to understand its reasoning – which may make it harder to understand and prevent future issues.
‘If we grow accustomed to accepting AI’s answers without an explanation, essentially treating it as an Oracle system, we would not be able to tell if it begins providing wrong or manipulative answers,’ said Dr Yampolskiy, who conducted the review for his new book, AI: Unexplainable, Unpredictable, Uncontrollable.
However, one of the most concerning elements of AI is its increasing autonomy. As AI’s ability to think for itself increases, humans’ control over it decreases. So too does safety.
‘Less intelligent agents – people – can’t permanently control more intelligent agents (ASIs),’ said Dr Yampolskiy. ‘This is not because we may fail to find a safe design for superintelligence in the vast space of all possible designs; it is because no such design is possible, it doesn’t exist.
‘Superintelligence is not rebelling, it is uncontrollable to begin with.’
To minimise the risks from AI, Dr Yampolskiy said users will need to accept reduced capability, and AI must have built-in ‘undo’ options in easy-to-understand human language.
‘Humanity is facing a choice,’ he said. ‘Do we become like babies, taken care of but not in control, or do we reject having a helpful guardian but remain in charge and free?’