Artificial Intelligence? 

Yesterday, the House of Lords Artificial Intelligence Committee released a report: AI in the UK: ready, willing and able?
 
There’s a lot to unpack from its recommendations, but what concerns me is what was omitted. In the summary, it states: “Many of the hopes and the fears presently associated with AI are out of kilter with reality. While we have discussed the possibilities of a world without work, and the prospects of superintelligent machines which far surpass our own cognitive abilities, we believe the real opportunities and risks of AI are of a far more mundane, yet still pressing, nature.”
 
What sort of fantasist would believe that we have anything to fear from superintelligent machines? Well, the late, great Stephen Hawking, as well as Elon Musk and Bill Gates, for starters. In focusing on the mundane, the Lords might be missing the existential risk in the room.
 
A dozen AI experts also signed the Hawking/Musk/Gates letter, and last year a robust survey of AI experts found that on average they expect AI to outperform humans in many activities over the coming decades: translating languages by 2024, writing high-school essays by 2026, driving a truck by 2027, working in retail by 2031, writing a bestselling book by 2049, and working as a surgeon by 2053. A few dozen experts think there is a 100% chance of human-level AI before 2050, and on average researchers believe there's a 50% chance of AI outperforming humans in all tasks within 45 years.

High-level machine intelligence (HLMI), the point at which unaided machines can accomplish every task better and more cheaply than human workers, may take a while, but it's short-sighted to dismiss it as not presenting “real opportunities and risks.” The Lords may have set it aside for another report, but if they ignore AI risks entirely they could be failing to understand the most brilliant and dangerous technology the world has ever known.
 
Nick Bostrom, director of the Future of Humanity Institute at Oxford University, has inspired both celebrants and critics with his thinking on the future impacts of AI. Bostrom believes AI presents an existential risk to humanity, and I think his ideas deserve to be taken seriously. In fairness to the Lords, he was a witness for the report, though he didn't go into the risks there; his Future of Humanity Institute did provide written evidence that was explicit about long-term AI safety concerns. The Lords should follow up on the Institute's offer to support policy thinking around this issue.
 
The average AI expert isn’t a doomsayer. The survey cited above finds that most experts think HLMI will be positive, but they aren't discounting catastrophic risks entirely. When the numbers are crunched, 14% of experts believe that AI might arrive soon, be superintelligent, and prove hostile.
 
Britain’s most esteemed Lords have got it wrong in the past. Lord Kelvin, the first British scientist to be elevated to the upper house, predicted that heavier-than-air flight was impossible eight years before the Wright brothers proved him wrong. Even Lord (Ernest) Rutherford was wrong about the significance of his own work. The father of nuclear physics said in 1933: “Anyone who expects a source of power from the transformation of these atoms is talking moonshine.”
 
The House of Lords Artificial Intelligence Committee would do better to keep an open mind on the long-term impact of AI. Perhaps they could chat with their colleague Lord Martin Rees, astrophysicist and co-founder of Cambridge’s Centre for the Study of Existential Risk. As the former President of the Royal Society puts it: “We don’t know where the boundary lies between what may happen and what will remain science fiction.”