As an artificial intelligence researcher, I have always felt that the worst part of AI is its role in spreading lies. AI amplification of lies in Myanmar reportedly contributed to the Rohingya massacre; the spread of COVID-19 and vaccine misinformation likely contributed to hundreds of thousands of preventable deaths; and election misinformation undermined our democracy and played a part in the January 6, 2021 insurrection. All of this is possible because people have turned algorithms into weapons, manipulating them to spread harmful information on platforms that claim to be neutral. These algorithms are all owned by companies, and they are not regulated. So far, none of the companies has admitted any liability. Apparently, no one should feel guilty.
If the federal government doesn’t start regulating AI companies, it will only get worse. Billions of dollars are pouring into AI technology that produces realistic images and text, with little control over who does what with it. This makes it far easier to create fake news, fake violence, fake extremist articles, nonconsensual fake nudity and even fake “scientific” articles that look real on the surface. The venture capital firms investing in this technology liken it to the early days of the internet. And as we all know, it is much easier to spread lies than to spread the truth. So is this like the beginning of the internet? Or is it like dropping a nuclear bomb on truth?
AI startups say that by making this technology public, they are “democratizing AI.” That is hard to believe coming from companies that stand to make billions once people believe in it. If they were about to become victims of a massacre fueled by AI-generated misinformation, or even victims of AI-enhanced bullying, they might feel differently. Misinformation is not innocent; it is a major cause of wars (think of World War II or Vietnam), even though most people are not familiar with the connection.
There are things we can do now to address these critical problems. We need regulations governing the use and training of specific types of AI technology.
Let’s start by regulating facial recognition technology (FRT), unless you don’t mind being recognized by AI and then kicked out of Radio City Music Hall because of ongoing litigation involving your employer. Users and developers of FRT should be required to obtain a license or certification, which would include training for everyone who builds or uses it.
We need to think about how to reduce the spread of harmful misinformation. One straightforward step is to make social media companies responsible for posted content, just like any other publisher. Some countries have such laws; the U.S. does not.
We also need to enforce existing laws around monopolistic practices, which would allow users to choose among social media platforms. If you can’t quickly download your data from your social media platform and upload it to a new one, the company is holding your data hostage, which can amount to a monopolistic practice. More competition would let users choose platforms based on their content moderation. No one should have to support companies that host and perpetuate real harm, online and in the real world, while making little effort to counter it. We don’t all have to be subject to the same algorithmic attention-seeking behavior.
We need to force companies to remove all child abuse content. It is shameful that AI can easily find this content yet is not used to remove it. What is even more shameful is that companies apparently do not always remove it when notified, or delay their efforts to do so.
It is also critical that transparent models be used for high-stakes decisions that deeply affect people’s lives. I have written extensively about this, pointing out that for high-stakes decisions, interpretable models perform comparably to black box models, even on difficult benchmark datasets. My lab has been instrumental in developing such interpretable machine learning models, some of which are used for high-stakes decisions, including in intensive care units.
Finally, we need to figure out how to regulate any new and potentially dangerous technology before it causes harm on a wide scale. Rep. Ted Lieu (D-Calif.) dramatically proposed in a New York Times op-ed the creation of a government agency dedicated to AI, which is a good idea.
This technology is like a runaway train that we are chasing on foot. With little incentive to do good, technology companies seem to ignore how their products impact, or even disrupt, society. They appear to be making too much money to care, so we, the citizens, need to step up and demand regulation. If we don’t, we are likely to drown in a dangerous flood of misinformation.
Cynthia Rudin is a professor of computer science; electrical and computer engineering; statistical science; and biostatistics and bioinformatics at Duke University, where she directs the Interpretable Machine Learning Lab.