A Stanford University report says that ‘incidents and controversies’ related to AI have increased 26-fold in a decade.
More than a third of researchers believe that artificial intelligence (AI) could lead to a “nuclear-level catastrophe”, according to a survey by Stanford University, highlighting the sector’s concerns about the risks posed by the rapid technological advancement.
The survey is one of the findings highlighted in the 2023 AI Index Report, released by the Stanford Institute for Human-Centered Artificial Intelligence, which examines the latest developments, risks and opportunities in the emerging field of AI.
“These systems demonstrate capabilities in question answering and the generation of text, images, and code that were unthinkable a decade ago, and they outperform the state of the art on many benchmarks, old and new,” said the report’s authors.
“However, they are prone to hallucination, routinely biased, and can be manipulated to serve nefarious purposes, highlighting the complex ethical challenges associated with their deployment.”
The report, released earlier this month, comes amid growing calls for AI regulation following controversies ranging from chatbot-linked suicides to deepfake videos of Ukrainian President Volodymyr Zelenskyy appearing to show him surrendering to invading Russian forces.
Last month, Elon Musk and Apple co-founder Steve Wozniak were among 1,300 signatories to an open letter calling for a six-month pause on training AI systems more powerful than OpenAI’s GPT-4 chatbot, arguing that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks manageable”.
In the survey highlighted in the 2023 AI Index Report, 36 percent of researchers said that decisions made by AI could lead to a nuclear-level disaster, while 73 percent said that AI could soon bring about “revolutionary social change”.
The survey heard from 327 experts in natural language processing, a branch of computer science that is key to the development of chatbots like GPT-4, between May and June last year, before the release of OpenAI’s ChatGPT in November took the tech world by storm.
In an IPSOS poll of the general public, also highlighted in the index, Americans appeared more wary of AI: only 35 percent agreed that “products and services using AI have more benefits than drawbacks”, compared with 78 percent of respondents in China, 76 percent in Saudi Arabia, and 71 percent in India.
The Stanford report also noted that the number of “incidents and controversies” related to AI has increased 26 times over the past decade.
Meanwhile, government efforts to oversee and regulate AI are gathering pace.
China’s Cyberspace Administration this week announced draft regulations for generative AI, the technology behind GPT-4 and domestic rivals such as Alibaba’s Tongyi Qianwen and Baidu’s ERNIE, to ensure that the technology complies with the “core values of socialism” and does not undermine the government.
The European Union has proposed an “Artificial Intelligence Act” to govern which types of AI are acceptable for use and which should be banned.
US public wariness about AI has yet to translate into federal regulation, but the Biden administration this week announced the launch of public consultations on how to ensure that “AI systems are legal, effective, ethical, safe, and otherwise trustworthy”.