All new technologies have risks and benefits. How can AI be regulated to mitigate its misuse while reaping its bounty?
At the NeurIPS meeting in Montreal in 2018, Edward Felten gave a plenary talk on how the regulatory machinery works in Washington. [1] He warned us that if we don’t regulate ourselves, the government will; when that happens, it is vital to be at the table where the laws are made. The danger is that overly restrictive laws can choke innovation, and even benign laws can have unintended consequences.
Self-Regulation
Figure 1. At the Asilomar meeting, molecular biologists self-imposed stringent containment procedures to prevent pathogens from escaping into the wild. [2]
The shortcomings of deep-learning AI systems are well known, including concerns about privacy, bias, and racism, and the community has been working to correct these defects. LLMs have added hallucinations and broader ethical concerns to the list. All new technologies undergo a long process of updating and shaping to prevent their misuse in society.
In the 1970s, biologists invented recombinant gene technology to manipulate and isolate DNA segments of interest, combine DNA from different species, and create new functions. Researchers realized that this technology gave them immense power to modify DNA and even create new life forms with the potential for improving crops and curing disease, but also with the danger of unleashing supervirulent and cancer-carrying viruses. Molecular biologists did not wait for regulatory agencies to curtail potentially dangerous experiments. They organized an International Congress on Recombinant DNA Molecules, held at the Asilomar Conference Center in Pacific Grove, California, in February 1975 (Figure 1). [3]
There was disagreement among the scientists about how to restrict experiments. After vigorous debate, they recommended several levels of containment for experiments with different levels of risk. Low-risk containment was appropriate when recombinant DNA could not significantly increase pathogenicity or disrupt ecologies. High-risk containment was needed when the modified organism could lead to severe consequences and pose a serious biohazard to laboratory personnel or the public. These rules would allow research to continue but under stringent guidelines.
These recommendations were adopted as standard practice by the scientific community, making it possible for scientists to perform experiments and advance science safely. Institutional Review Boards (IRBs) at universities and companies review proposed experiments to ensure they comply with regulations, meet accepted ethical standards, follow institutional policies, and adequately protect research participants. These safety policies are amended as discoveries are made and even more powerful gene-manipulation techniques are invented.
There is a thriving biotechnology industry today, and many patients have benefited. By tinkering with genes, scientists discovered that cancer is a genetic disease with diverse pathways leading to different cancer types. Once a pathway is identified, drugs can be designed to target specific biochemical reactions and suppress proliferation. For example, Tony Hunter, my colleague at the Salk Institute, discovered a new class of enzymes, the tyrosine kinases, a finding that led to Gleevec, a drug that can check the progression of a form of leukemia. Immunotherapy has cured cancers that were once death sentences, such as melanoma and non-small cell lung cancer.
Self-regulation seems like a sensible way for the AI community to move forward, but it will not come easily given the diverse views in the community and powerful self-interests in the corporate sector. The time is right for scientists and engineers who understand AI technology to work with policy experts to develop a flexible regulatory framework.
Government Regulation
In 2021, the European Commission proposed a European Union law on artificial intelligence. The AI Act became the first law on AI by a major regulator when the European Parliament passed it on March 13, 2024. The document is 105 pages long, with 89 sections in the preamble, 44 definitions, 4 forbidden practices, and 85 Articles spelling out rules and penalties, all written in legalese. [4] For example, it limits resume-scanning tools that rank job applicants. This approach looks well-meaning but premature. [5] AI is moving so quickly that these rules are already obsolete; ChatGPT and generative AI models were not mentioned when the law was drafted. AI is evolving much faster than the regulatory machinery.
On May 16, 2023, Sam Altman, CEO of OpenAI, testified before Congress for three hours on the need to regulate AI (Figure 2). [6] “I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” he said. “We want to work with the government to prevent that from happening.” He proposed a government agency that would investigate companies and issue licenses for them to develop LLMs, with safety regulations and tests required before release to the public, just as the Food and Drug Administration (FDA) regulates drug clinical trials. “We believe that the benefits of the tools we have deployed so far vastly outweigh the risks, but ensuring their safety is vital to our work,” Altman said.
Figure 2. Sam Altman, CEO of OpenAI, testified before Congress on May 16, 2023. (New York Times)
Altman’s testimony before Congress was cordial, in contrast to the antagonistic exchanges that Mark Zuckerberg, Jeff Bezos, and other high-tech CEOs had previously endured. Altman also had dinner with dozens of House members and met individually with several senators before the hearing, where he outlined a loose regulatory roadmap for managing the rapidly developing AI landscape that could significantly impact the economy. On June 9, 2023, Altman made a high-profile trip to South Korea and called for coordinated international regulation of generative artificial intelligence.
Altman’s full-court press for regulation puzzled me. It reminded me of the strange effects of Toxoplasma gondii, a brain parasite, on rodents: once infected, rodents lose their fear of cats and become more likely to be eaten. [7] What is the advantage of this perversion for the parasite? The parasite can reproduce only in cats, which then spread it far and wide.
Another possible explanation for why Altman offered himself up is that stiff regulations would favor the largest high-tech companies, which can afford stringent testing comparable to the hundreds of millions of dollars that clinical trials cost pharmaceutical companies. Regulations lead to lawsuits, which favor high-tech companies with their deep benches of staff lawyers. Big Pharma companies buy small biotechnology companies that cannot afford phase 3 clinical trials. Altman was proposing the same model to Congress for the AI industry.
On November 17, 2023, Ilya Sutskever, OpenAI’s chief scientist and a member of its board, announced that Sam Altman had been fired for being “not consistently candid.” The board that supported this coup was heavily skewed toward those who thought Altman was not prioritizing the dangers posed by AI and was moving too fast. When investors and employees pushed back, Altman was reinstated four days later. [8] This is a microcosm of the AI debate between those who want to slow down and those who want to move forward. The new board resembles boards at other high-tech companies, whose goals are aligned with those of their investors. This dramatic boardroom confrontation is a harbinger of how the AI debate might play out in government, which needs to balance safety with viability.
On October 30, 2023, the White House issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Figure 3) [9]. This 117-page document requires companies to provide the government with safety testing results and other proprietary information before making large AI models available to the public. Government agencies will develop and enforce standards. The Executive Order does not prevent using copyrighted data to train LLMs.
Figure 3. The Office of the President issued this Executive Order to regulate AI in the United States in 2023.
Another Way to Regulate AI
Regulation of AI has emphasized reducing risks, but the risks must be balanced against potential benefits. The European Union’s heavy-handed attempt to regulate AI by micromanaging the risks is an excellent way to smother the benefits at birth. The United States has taken a light-handed approach, giving AI an open road to expand and diversify while minimizing risks. Which is the better approach? Perhaps there is another way.
Jason Furman recently wrote “How to Regulate AI Without Stifling Innovation.” [10] He outlined six principles, paraphrased here, to ensure that AI can safely grow while balancing risks with benefits:
1) New regulations should be aimed not at AI as a whole but at specific applications. We already have agencies regulating drugs, airlines, and environmental safety. AI will impact all these areas, so let these agencies decide how best to regulate each application. Agency expertise in AI will have to expand for this to succeed, and our educational systems will have to shift into high gear to train workers and regulators to handle a tidal wave of AI applications.
2) Regulations should not favor high-tech companies but should encourage innovation by researchers and startups. There are 100,000 AI startups worldwide, minnows that need protection from hungry predators. There are precedents for gatekeepers stifling competition: medieval guilds, teachers’ unions, and centralized licensing bodies. Do not let big companies or big lobbyists set rules that exclude fair competition.
3) How are existing regulations hindering progress? AI needs more data centers to meet demand, but local laws are making it difficult for companies to obtain the power to run them. States have introduced a counterproductive jumble of regulations. Here is where federal leadership could ensure a level playing field for AI growth with minimal intervention or cost.
4) Regulation should not delay the adoption of AI until it has been proven absolutely safe. Strong regulations can delay the incremental improvement needed to achieve safe AI or even block the path to it. Lost benefits include more cancer deaths from delayed drug discovery, poorer educational outcomes from delayed personal digital tutors, excess car accidents from delayed self-driving cars, and a larger carbon footprint from slower progress on battery storage for sustainable power on the grid.
5) Don’t compare AI with perfection; compare it with imperfect humans. The number of crashes caused by self-driving cars should be compared with the number caused by human drivers. There are currently over 40,000 deaths on U.S. highways each year, a loss society has implicitly accepted in balancing costs and benefits. How do biases in AI compare with human biases? Which will be easier to fix? AI is improving much faster than humans are.
6) Regulation is not the answer to all problems. Jobs are already changing, and AI can make workers more productive. Don’t ban AI from industry and classrooms; invest instead in training the workforce with new skills. Companies and schools will need help adapting their procedures and curricula. Government incentives and interventions could jump-start the arrival of future benefits.
These are sensible principles that can help guide us toward a balance between future risks and benefits of AI.
[1] The video can be seen at https://videoken.com/embed/bf-E2oVjI9M.
[2] Art by David Parkins, https://www.nature.com/articles/455290a. Sydney Brenner, with the bushy eyebrows, is sitting in the middle (https://dnalc.cshl.edu/view/15284-The-media-at-Asilomar-Sydney-Brenner.html). Who can identify the other delegates?
[3] Paul Berg, “Asilomar 1975: DNA Modification Secured,” Nature 455 (2008): 290–291, https://www.nature.com/articles/455290a.
[4] You can download the AI Act from https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206.
[5] An early draft of the AI Act required sourcing for all data used to train the model. This was removed after Mistral, a French AI startup, lobbied with political support from President Macron’s office (“Meet the French startup hoping to take on OpenAI,” The Economist, March 2, 2024). Mistral was later investigated by the European Commission when it formed a strategic partnership with Microsoft (https://www.reuters.com/technology/microsofts-deal-with-mistral-ai-faces-eu-scrutiny-2024-02-27/).
[6] Cecilia Kang, “OpenAI’s Sam Altman Urges A.I. Regulation in Senate Hearing,” New York Times, May 16, 2023.
[7] Kelly Servick, “Brain parasite may strip away rodents’ fear of predators—not just of cats,” Science, January 14, 2020, https://www.science.org/content/article/brain-parasite-may-strip-away-rodents-fear-predators-not-just-cats.
[9] The White House, Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, October 30, 2023, https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/.
[10] Jason Furman, “How to Regulate AI Without Stifling Innovation,” Wall Street Journal, November 21, 2024.