“Every eighteen months, the minimum IQ required to destroy the world decreases by one point,” AI theorist Eliezer Yudkowsky, co-founder of the Berkeley-based Machine Intelligence Research Institute, once said in a grim riff on Moore’s Law. While warnings of existential risk from AI, a subject of renewed debate since the explosive debut of OpenAI’s ChatGPT, may seem excessive at the moment, policymakers across jurisdictions are raising regulatory scrutiny of generative AI tools. The concerns flagged fall under three broad heads: privacy, systemic bias, and infringement of intellectual property rights.
The policy responses vary, too. The European Union has taken a predictably stronger stance, proposing a new AI Act that classifies artificial intelligence applications by use case, based largely on the level of intrusiveness and risk. The UK is at the other end of the spectrum, with a decidedly ‘light-touch’ approach that aims to promote, rather than stifle, innovation in this new field. The US approach falls somewhere in between: Washington is now setting the stage for defining an AI regulatory playbook, having started public consultations earlier this month on how to regulate AI tools. This builds on a move by the White House Office of Science and Technology Policy last October to unveil a Blueprint for an AI Bill of Rights. China, too, has released its own set of measures to regulate AI.
India has said it is not considering any legislation to regulate the artificial intelligence sector, with Union IT Minister Ashwini Vaishnaw noting that although AI “has ethical concerns and associated risks”, it has proven to be an enabler of the digital and innovation ecosystem.
“NITI Aayog has published a series of papers on the topic of Responsible AI for All. However, the government is not thinking of bringing a law or regulating the growth of artificial intelligence in the country,” he said in a written reply to the Lok Sabha in this Budget Session.
The American Approach
The US Department of Commerce took the most decisive step yet in addressing regulatory uncertainty in this space when, on April 11, it asked the public to weigh in on how it should create rules to ensure that AI systems work as advertised. The agency has flagged the possibility of instituting an auditing system to check whether AI systems embed harmful bias or distort communications to spread misinformation or disinformation.
According to Alan Davidson, an assistant secretary at the US Department of Commerce, new assessments and protocols, such as audits, may be needed to ensure that AI systems work without negative consequences, much as financial audits verify the accuracy of business statements. One reason for all this US policy action is an October 2022 move by the White House Office of Science and Technology Policy (OSTP), which published a Blueprint for an AI Bill of Rights that, among other things, lays out a non-binding roadmap for the responsible use of AI. The 76-page document outlines five core principles to guide the responsible development of AI systems, with particular attention to unintended consequences for civil and human rights. The broad principles are:
Safe and effective systems: Users should be protected from unsafe or ineffective systems.
Algorithmic discrimination protections: Users should not face discrimination by algorithms.
Data privacy: Users should be protected from abusive data practices through built-in protections, and should have agency over how their data is used.
Notice and explanation: Users should know when an automated system is being used, and understand how and why it contributes to outcomes that affect them.
Human alternatives: Users should be able to opt out and have access to a person who can quickly consider and remedy the problems they encounter.
The blueprint clearly states that it sets out to “help guide the design, use, and deployment of automated systems to protect the American Public”, with the principles being non-regulatory and non-binding: a “Blueprint”, as announced, and not yet an enforceable “Bill of Rights” backed by legislative safeguards.
The document includes several examples of AI use cases that the White House OSTP considers “problematic”, and goes on to explain that the framework applies only to automated systems that “have the potential to significantly affect the rights, opportunities, or access of the American public to critical resources or services, generally excluding many industrial and/or operational applications of AI”. The blueprint expands on examples of the use of AI in lending, human resources, surveillance and other areas, which find a counterpart in the ‘high-risk’ use-case framework of the proposed EU AI Act, according to a World Economic Forum synopsis of the document.
But analysts point to gaps. Brookings’ Nicol Turner Lee and Jack Malamud say that while the need to identify and mitigate the intended and unintended consequences of AI has long been recognised, how the blueprint will facilitate redress of such complaints has yet to be determined. “Furthermore, questions remain as to whether the non-binding document will prompt the necessary congressional action to manage this unregulated space,” they said in a December paper titled Opportunities and blind spots in the White House’s blueprint for an AI Bill of Rights.
The debate over regulation is intensifying amid rapid developments following the launch of ChatGPT, the chatbot from San Francisco-based OpenAI that is estimated to have acquired more than 100 million users. Google has followed with its Bard chatbot, while Chinese companies have joined the race: Baidu has unveiled its Ernie Bot, and Alibaba has announced plans to release a bot of its own.
Stop developing AI
Tech leaders Elon Musk, Apple co-founder Steve Wozniak and more than 15,000 others reacted by calling for a six-month freeze on AI development, saying that labs are in an “out-of-control race” to develop systems that no one, not even their creators, can fully control. They also said labs and independent experts should work together to implement a set of shared safety protocols. Yudkowsky, too, is among those calling for a global moratorium on AI development. But that call has divided opinion further.
“The call for a halt on work on models more advanced than GPT-4 is regressive, in that we would be policing a technology that could prove harmful to society. But the truth is that anything can be harmful if left unattended and uncontrolled. Instead of calling for a halt, one should think about the monetisation, regulation, and careful use of LLMs and related technologies,” Anuj Kapoor, an Assistant Professor of Quantitative Marketing at IIM Ahmedabad, told The Indian Express.
While the US has seen a flurry of policy activity, there is little optimism about how much progress Washington will make on this issue. The US Congress has repeatedly been urged to pass laws limiting the powers of Big Tech, but those attempts have made little headway because of political divisions among legislators.
The EU seems to have erred on the side of caution, with Italy setting the stage by becoming the first major Western country to ban ChatGPT over privacy concerns. The 27-member bloc was a first mover, initiating measures to regulate AI in 2018, and the EU AI Act, due in 2024, is therefore a much-anticipated document.
China is developing its own regulatory regime for the use of AI. Earlier this month, the country’s internet regulator, the Cyberspace Administration of China, released a 20-point draft to regulate generative AI services, including mandates to ensure accuracy and privacy, avoid discrimination and guarantee intellectual property rights.
The draft, which has been published for public feedback and is likely to be implemented later this year, also requires AI providers to clearly label AI-generated content, establish a mechanism to handle user complaints and undergo a security assessment before going public. Content produced by AI must also “reflect the core values of socialism” and must not contain any subversion of state power that could lead to the overthrow of China’s socialist system, according to the draft as cited by Forbes.
In fact, the Chinese regulations were published the same morning the US Commerce Department launched a request for comments on AI accountability measures.