Why perceptions of AI matter
The UK government has made boosting research and innovation in both the public and private sectors a priority. Boris Johnson has committed to doubling the public R&D budget over five years, on the way to meeting a pledge of spending 2.4% of GDP on R&D by 2027. Investment in AI is an important element of these plans.
The 2019 AI Sector Deal set out a range of policies designed to stimulate and facilitate sector-wide growth, as well as to ‘promote the adoption and use of AI in the UK’. However, the longer-term sustainability of the jobs and innovation boom implied by the promise of up to £603 million in additional funding depends in part on the public’s understanding and acceptance of AI. Do the UK public view AI as a beneficial, productivity-boosting tool, or as a suspicious mechanism of surveillance and control?
The rise of the machines
Research commissioned by Zinc Network suggests that positive perceptions of AI are certainly not guaranteed. Our poll found that 66% of the British public associate the term AI with the embodied forms of artificial intelligence most often found in popular culture, such as The Terminator, HAL from 2001: A Space Odyssey or the Machines from The Matrix.
The most commonly identified example in the poll was Alexa and other voice assistants, which 52% of respondents regarded as a prominent form of AI. This suggests that the perception of AI as an embodied, autonomous entity, one a person can interact with as if it were another intelligent creature, has started to transcend popular culture and is now the primary framework through which the British public understands AI.
If not sufficiently considered, this may present a challenge to the government’s ambitions. An RSA study on AI narratives – that is, how people talk about AI – notes that the most prominent narratives place ‘an over-emphasis on humanoid representations’, encouraging false, exaggerated or unnecessarily fearful expectations.
Perceptions rooted in villainous sci-fi incarnations don’t lend themselves to a nationwide welcoming of the deeper integration of AI into industry, government and wider society. However, this is not an insurmountable hurdle; Ofcom estimates that, as of February 2019, at least one in five UK homes owned a smart speaker or voice assistant such as Alexa or Siri, the most recognised form of AI according to Zinc’s research.
The characteristics associated with AI technology are driven by cultural circumstance. As highlighted by the RSA study, Japanese AI narratives often portray embodied AI as friendly and helpful, in contrast to the English-speaking Western narratives outlined above. So, it may be possible to learn from other contexts and cultures and re-orient the public’s key AI references and present alternative narratives to the prevailing concerns.
It is also worth considering that a broad understanding of AI as potentially dangerous and worthy of rigorous scrutiny is not without benefit. A public that is sceptical and concerned about the future of AI could work as a useful check on full-steam-ahead government ambition.
Not humanoid after all
For many AI technologies, it may be possible to sidestep these associations entirely. Respondents’ focus on embodied forms of AI suggests that widely used services like Google Maps, Uber and Netflix are not commonly perceived as AI-related technologies by the British public. As such, their use has been quietly normalised over the last few years in the most commonplace parts of everyday life, without much public awareness.
This presents both an opportunity and a risk: if or when perceptions of AI evolve, this could promote positive engagement with and acceptance of these services. However, it could equally provoke a backlash and engender mistrust. There is a fine line between feeling reassured and feeling tricked; effective communication is therefore crucial.
As the integration of AI into society becomes deeper and more widespread, the narratives through which people understand AI will have an increasing impact. If we cross our fingers and hope that fears of The Terminator’s menacing Skynet future will simply go away, AI will not become the innovation and productivity powerhouse that many hope it will.
By recognising the public’s understanding of AI and working to pre-emptively remedy potential points of concern, we can ensure that its potential is maximised in a way that works for everyone.
Notes on method
Fieldwork was conducted on behalf of Zinc Network by Populus, using an online methodology. Questions were put to a sample of 1,093 people, weighted to be nationally representative by age, gender and region.
The fieldwork took place on June 22nd and June 23rd 2020.
The margin of error at the total level is ±3 percentage points. All differences against the total that have been flagged are statistically significant.
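As a sanity check, the quoted ±3-point margin is consistent with the standard formula for a 95% confidence interval on a proportion, given the sample of 1,093. A minimal sketch (the 95% confidence level and worst-case proportion of 0.5 are assumptions; the poll write-up does not state them):

```python
import math

n = 1_093  # sample size reported in the methodology note
p = 0.5    # worst-case proportion, which maximises the margin of error (assumed)
z = 1.96   # z-score for a 95% confidence level (assumed)

# Margin of error for a simple random sample: z * sqrt(p(1-p)/n)
margin = z * math.sqrt(p * (1 - p) / n)
print(f"±{margin * 100:.1f} percentage points")  # prints "±3.0 percentage points"
```

Note that this simple formula ignores any design effect from the weighting described above, so the true margin for weighted estimates may be slightly larger.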