With AI, we need both competition and safety
Brookings · Published July 8, 2024
Commentary · Artificial Intelligence
Leans Left
Summary
- Tom Wheeler and Blair Levin at Brookings argue that the FTC and DOJ should investigate AI collaborations and transactions for antitrust concerns while simultaneously encouraging AI safety standards through industry cooperation.
- They propose a model that balances competition and AI safety, advocating supervised processes, market incentives, and regulatory oversight to ensure AI companies collaborate on safety without undermining competitive markets.
Overview:
This article was written by Tom Wheeler and Blair Levin at Brookings.
- The Federal Trade Commission (FTC) and Department of Justice (DOJ) are investigating AI collaborations for potential antitrust violations due to concerns over market concentration and competition.
- The authors argue that AI safety should be a priority alongside competition, and that collaborations to set safety standards can be structured so they do not undermine competitive practices.
Key Quotes:
- "Building the AI future around competition and safety should be a no-brainer."
- "AI may be new, but the responsibilities of AI companies to protect their users have been around for literally hundreds of years."
What They Discuss:
- The potential of AI to surpass human cognitive abilities in the near future and the risks that would follow.
- The importance of creating uniformly applicable safety standards to prevent a "race to the bottom."
- Examples of effective industry-government collaborations, such as the American Medical Association's standards for doctors and FINRA's regulation of the financial industry.
- The necessity for transparency and ongoing oversight in ensuring AI safety standards.
- Historical precedents like the Cybersecurity Social Contract, which balanced collaboration and compliance with antitrust laws.
What They Recommend:
- Encourage collaboration between AI companies to establish and adhere to AI safety standards.
- Develop a model that evolves as technology advances and incentivizes companies to exceed baseline safety standards.
- Ensure transparency and oversight to enforce compliance and protect public welfare.
- Draw lessons from successful industry-government collaborations to create enforceable AI safety standards.
- Clarify government policy to support AI safety collaborations without impeding competition through an executive order or joint FTC/DOJ statement.
Key Takeaways:
- AI development must balance safety and competition to protect public interests while fostering innovation.
- Collaboration on AI safety is necessary and can coexist with competitive practices, as evidenced by historical regulatory examples.
- The government needs to adopt a supervisory rather than a dictatorial role in enforcing AI safety standards.
- Clear policies and collaborative frameworks are essential to achieve safe and competitive AI markets.
This is a brief overview of the article by Tom Wheeler and Blair Levin at Brookings. For complete insights, we recommend reading the full article.
Original Read Time
9 min
Organization
The Brookings Institution
Category
Artificial Intelligence
Political Ideology
Center Left