This week, US president Joe Biden told an array of politicians and dignitaries at Leinster House that the world was at an “inflection point”.
“The choices we make today will literally determine the future or the history of the world for the next four to five decades,” he told the joint Houses of the Oireachtas. To underline his point, he singled out artificial intelligence: “It has great promise and great concern.”
His emphasis on artificial intelligence (AI) comes at a time when the emergence of ChatGPT has sparked a heated and sometimes overwrought debate.
US senator Chris Murphy last month tweeted a claim that ChatGPT taught itself to perform advanced chemistry. “Something is coming,” he warned. “We are not ready.”
Consensus about what, exactly, is coming is hard to come by – but many commentators agree that the pace and scale of change brought about by the rapid development of AI, and the promise of more to come, is enormous.
Dr Seán Ó hÉigeartaigh, an Irish academic who directs the AI: Futures and Responsibility programme at the University of Cambridge, said that the transformative potential of AI “is in the order of the internet – and I think there is potential in a 10-year time frame, if progress continues, we will be looking at the order of the industrial revolution”.
Ó hÉigeartaigh admits to being at the radical end of the spectrum, but his view is not unlike that of some senior Government figures. A senior source likened it to the advent of nuclear power. Another senior figure said that the technology “is changing how we all live without us talking about it”.
As the AI debate gains currency, it prompts many questions about the impact on people’s digital and physical lives, their job security, and the social, political and economic consequences of a transformation predicted to unfold at a scale and speed that is difficult to grasp.
“The real question is what happened in the last two months,” said Richard Browne, director of the National Cyber Security Centre (NCSC), the State agency charged with scanning the digital horizon for threats. “Everything about the policy arena here is challenged by the speed at which it’s happening.”
‘Perfect storm’
Recent advances in AI have been made possible by what Dr Edward McDonnell calls a “perfect storm”. McDonnell, director of CeADAR, Ireland’s national centre for applied AI, says the availability of large amounts of data on which new systems can be “trained” has coincided with “amazing progress in the amount of computer power available”. He described it as “a point in the process where there was a big leap forward”.
The potential benefits are huge: optimists envision a future in which almost all sectors of society benefit from massive productivity gains. “We should accept it and not condemn it,” said a senior Government source.
However, the caveats also stack up. The Financial Times this week published an article by an AI investor who pleaded that “we need to slow down the race to God-like AI”; a paper published by a US AI researcher warns that “natural selection favours AI over humans”. Some reports are shocking: Belgian newspapers published details of a man who died by suicide after weeks of interacting with a chatbot about climate change; US activists published screenshots of interactions on an AI-powered experimental Snapchat service that gave advice to a 13-year-old girl who was planning to have sex with a 31-year-old, and to a child seeking to cover up bruises before a visit from protective services (in both cases, adults were playing the part of children asking for advice).
There is a growing school of thought that argues that the speed of recent developments means optimism should be tempered by a realistic assessment of how life is changing, and how society should plan for it.
“I am enthusiastic and concerned,” said Ó hÉigeartaigh of Cambridge University. “Things are moving very fast at the moment and we are nowhere near ready for its impact.” He believes that AI will alleviate many of the “difficulties of modern life” and will greatly help professionals. However, it could also make many jobs less secure, or less economically viable.
Different sectors need to be involved “to ensure that AI is not just something that happens to them but that they shape its management as it affects their sectors”.
“We as a society, including those of us who think about the fundamental impact of these technologies, don’t have enough time and struggle to keep up with what’s happening.”
Guidelines
Last week, Minister of State for cybersecurity Ossian Smyth said he had asked the NCSC to develop public-facing guidelines for a world with new or heightened risks arising from the proliferation of AI technology. He says “a wave” of disruption could occur, including in fraud – where scams that were previously one-to-one could proliferate, backed by sophisticated AI.
Another is within politics, where so-called ‘networks of influence’ can change the debate by using AI-powered bot armies, or by rapidly creating and disseminating deepfakes and other forms of misinformation. “It’s about scale and volume,” said Joseph Stephens, NCSC’s head of engagement. “It’s really accelerating the threat picture that’s already in place.”
Politics, some fear, has a recent track record of being vulnerable to new threats. Mark Brakel is the policy director of the Future of Life Institute, a non-profit that aims to reduce risks from technology – historically funded in part by Elon Musk, and also benefiting from significant support from Vitalik Buterin, founder of the cryptocurrency ethereum. He says social media should be considered humanity’s first contact with “really simple AI systems”, which are becoming more and more sophisticated. “We need to learn the lessons from how badly we did in regulating social media to get ahead of the curve this time,” he said – warning that social media was greeted with great enthusiasm and only limited regulatory efforts to “carve out the edges – and a few years later we woke up to a broken political system”.
Regulation
There will inevitably be calls for more regulation of AI. In Washington, Senate leader Chuck Schumer is taking early steps towards regulating AI, while Brussels is a little ahead of the game, having worked on a draft AI Act for two years – and is now scrambling to adapt those efforts to the latest developments.
Ireland has had an AI policy since 2021, with the Department of Enterprise holding overall responsibility. Other parts of the State apparatus are also involved – the NCSC has been factoring AI into its threat assessments for years.
It is understood that security officials are in contact with multinationals based here and are in the process of creating additional guidance for State agencies, while a midterm review of the current cybersecurity strategy, due within weeks, is expected to place more emphasis on AI.
While the threat is clear, security officials also say it has not featured as a primary risk in assessments shared by Ireland with international partners or friendly governments. It has yet to crack the ever-circulating lists of the most pressing threats facing the State.
The NCSC expects that the threat, if it materialises, will not come in the first instance from the technology itself – but from its deployment by a bad actor such as a criminal organisation or a rogue state. “The real challenge is those countries and entities that do not comply with international law,” said NCSC director Richard Browne. The NCSC also argues that AI tools allow it to be more effective, even if they can empower bad actors.
“It’s a bit of an arms race between the two sides. And, like all new technologies, it’s a double-edged sword,” Stephens said.
The real challenge, Browne said – while stressing that there is no suggestion this is happening now – is “what happens when tools and technology evolve at an exponential rate, and the things that are used to protect networks – the processes, the tools – become redundant, essentially, and you are caught in a technological revolution rather than evolution”.
Call for a pause
The Future of Life Institute last month published an open letter calling for an immediate pause of at least six months in the training of powerful AI systems, signed by Musk, Apple co-founder Steve Wozniak and others. Ó hÉigeartaigh is among the signatories – he says this is an important area for debate to emerge. While he and many others are sceptical about the imminent arrival of an “artificial general intelligence” equivalent to human-level intelligence, he says the speed of recent advances means that “these are the questions that now need to be given serious attention”.
“The best shot we have now is the EU AI Act, with the regulators being adequately funded and resourced,” said Ireland’s AI Ambassador Patricia Scanlon – who describes herself as an advocate for ethical AI. “I’m very glad it came out when it did because it shines a light on the loopholes in the law,” she said. In the face of rapid change, Scanlon is wary of firm predictions – “Anyone who comes out and says they know exactly what’s going to happen in the next five or 10 years, I don’t think they are honest,” she said. But she warns that there is, nonetheless, a window in which to prepare.
“I get nervous if I feel that people don’t take it seriously, let it run, or give in to lobbying on it.”

“I think we have time and I hope that we all take this moment seriously so we don’t regret it later.”
Minister of State Dara Calleary, who oversees the policy, said EU legislation would “put in place guardrails for the use of AI” and that Ireland was “actively engaged” with Brussels in developing the policy.
Smyth, the minister of State for cybersecurity, warned more bluntly: “This is a very powerful tool that we need to spend time studying and mastering – before it masters us.”